Cybersecurity researchers today disclosed details of a memory vulnerability in IBM’s Db2 family of data management products that could allow a local attacker to access sensitive data or even cause a denial-of-service condition.
The flaw (CVE-2020-4414), which impacts IBM Db2 V9.7, V10.1, V10.5, V11.1, and V11.5 editions on all platforms, is caused by improper use of shared memory, allowing a bad actor to perform unauthorized actions on the system.
By sending a specially crafted request, an attacker could exploit this vulnerability to obtain sensitive information or cause a denial of service, according to Trustwave SpiderLabs security and research team, which discovered the issue.
“Developers forgot to put explicit memory protections around the shared memory used by the Db2 trace facility,” SpiderLabs’s Martin Rakhmanov said. “This allows any local users read and write access to that memory area. In turn, this allows accessing critically sensitive data as well as the ability to change how the trace subsystem functions, resulting in a denial of service condition in the database.”
IBM released a patch on June 30 to remediate the vulnerability.
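The class of bug described above — a shared resource created without explicit access protections — can be sketched in a few lines. The snippet below is purely illustrative: a plain file’s permission bits stand in for a shared-memory segment, and none of the names reflect Db2’s actual code. It shows why an object created with world-readable/writable permissions lets any local user read or tamper with it:

```python
import os
import stat
import tempfile

# NOTE: illustrative stand-in only — a regular file plays the role of
# a shared-memory segment; this is not Db2's actual trace code.
os.umask(0)  # make the requested modes take effect exactly
workdir = tempfile.mkdtemp()

# Vulnerable pattern: mode 0o666 lets ANY local user open the object
# read-write, analogous to the unprotected trace memory described above.
bad = os.open(os.path.join(workdir, "trace_bad"), os.O_CREAT | os.O_RDWR, 0o666)

# Safer pattern: 0o600 restricts access to the owning user.
good = os.open(os.path.join(workdir, "trace_ok"), os.O_CREAT | os.O_RDWR, 0o600)

bad_mode = stat.S_IMODE(os.fstat(bad).st_mode)
good_mode = stat.S_IMODE(os.fstat(good).st_mode)
print(oct(bad_mode), oct(good_mode))  # 0o666 0o600
```

The `0o666` mode on the first object is what grants every local user read and write access; the fix for this class of bug is requesting restrictive permissions (here, `0o600`) when the shared resource is created.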
If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.
In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.
What exactly is Big Data? Big Data refers to large, complex data sets, which can be both structured and unstructured. The concept encompasses the infrastructures, technologies, and tools created to manage this large amount of information.
To meet the need for high performance, Big Data analytics tools play a vital role. Various Big Data tools and frameworks are responsible for retrieving meaningful information from huge data sets.
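As a toy illustration of the structured/unstructured distinction drawn above, the sketch below uses made-up sample data to show why structured records can be queried directly, while free text needs extra processing before it yields anything analyzable:

```python
import json

# Structured data: records with a fixed schema (illustrative sample data).
structured = json.loads('[{"user": "a", "clicks": 3}, {"user": "b", "clicks": 5}]')
total_clicks = sum(record["clicks"] for record in structured)  # direct aggregation

# Unstructured data: free text, which needs parsing or NLP before analysis.
unstructured = "Great product, shipping was slow though."
word_count = len(unstructured.split())  # even a word count requires tokenizing first

print(total_clicks, word_count)  # 8 6
```

Big Data tooling exists largely to perform this kind of extraction and aggregation at a scale where hand-written scripts no longer suffice.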
The most important and popular open-source Big Data analytics tools in use in 2020 are as follows:
The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.
This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.
As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).
This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.
Many organizations have growing volumes of data and are running data management programs to better organize it. Interestingly, their problems are much the same across sectors and industries, and data management helps them configure solutions.
The fundamentals of enterprise data management (EDM), which one uses to tackle these kinds of initiatives, are the same whether one is in the health sector, a telco, a travel company, or a government agency. The fundamental practices one needs to follow to manage data are therefore similar from one industry to another.
For example, suppose you’re about to design a program of work. It may be an integration platform project or a big data warehouse project; either way, the principles for designing that program are much the same regardless of the actual details of the project.
The past tumultuous year saw many organisations caught up in a cloud frenzy, heavily investing in data lakes and data warehouses to get the most out of their precious data in a more flexible, scalable and efficient way. But despite the investments, we saw that data management was still a big challenge for companies in 2020. The majority of them still struggled with legacy systems, a lack of domain-specific skills, a lack of clearly defined data governance policies and poor data quality, preventing them from thriving in their data and AI innovations. It was clear that the new circumstances required enterprises to review their data management strategies and design platforms to support their AI-driven ambitions.
Nevertheless, 2021 carries a more positive outlook for data management. G2 Research Hub forecasts that in 2021, data-driven leaders will reassess their data management strategies due to the evolving technology environment. Organisations will also invest in scalable data platforms to effectively secure, govern, and analyse data across business functions through a single unified platform. These modern data platforms will provide seamless access to data irrespective of where it resides, helping companies gain valuable insights and make better business decisions. Research trends indicate that data management capabilities are being reinforced with AI and ML to handle ever-evolving complexities like data diversity and disparity across environments. Another important transformation in data management is the blurring line between IT and business responsibilities. Organisations will eliminate functional boundaries, enabling enterprise-wide data collaboration and empowering stakeholders across the organisation with the right data at the right time.
These are some of the trends, strategies and methodologies that will be discussed at the upcoming Data 2030 Summit in February. Eager to learn more about the developments in data management we can expect in the new year and beyond, we’ve invited some of the Data 2030 Summit speakers to share how they see specific data management components developing in the near future. Here is a roundup of the experts’ data management predictions for 2021 and beyond.