Metadata Management in Big Data Systems: A Complete Guide

Metadata management is one of the major components of any data initiative. Some organizations have a bewildering time when trying to incorporate metadata into their data management process.

Originally published by Terence Nero  at cuelogic.com

What Is Metadata?

Among the various classifications of data seen in modern data science work, metadata is the type that tells users about the data itself. Users may be familiar with the DESCRIBE statement in SQL, which condenses information about data types, lengths, and entries.

(Image source: https://medium.com)
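
To see the same idea in code: in Python, pandas exposes comparable dataset metadata. Below is a minimal sketch; the `orders` frame is an invented example, not data from any real system.

```python
import pandas as pd

# An invented toy dataset.
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [19.99, 5.00, 42.50],
})

print(orders.dtypes)      # column names and data types
print(orders.describe())  # count, mean, min, max, quartiles per numeric column
```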

Similarly, service meshes like Istio let users dig deeper into their data using sets of meta-tags, which look much like those used in websites and web content.

(Image source: https://www.twistlock.com)

These tags and indexes help users learn details about the data, such as the following (a code sketch of such a record appears after the list):

  • The title and description of the dataset.
  • Summarized information about the dataset, such as the number of entries, maximum and minimum values, and the number of attributes.
  • The tags and categories under which the data can be filed, such as contextual, financial, relational, and many more.
  • When an entry was created and who inserted it, along with details of modifications: who last modified the entries and when.
  • Information about access controls, laying out rules as to who can update the data.
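
As a minimal sketch, a metadata record carrying the fields above might look like the following in Python; every field name and value here is illustrative, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class MetadataRecord:
    title: str
    description: str
    row_count: int                  # summary stats about the dataset
    attribute_count: int
    tags: List[str]                 # e.g. ["contextual", "financial"]
    created_at: datetime            # who created the entry, and when
    created_by: str
    last_modified_at: datetime      # who last modified it, and when
    last_modified_by: str
    allowed_updaters: List[str] = field(default_factory=list)  # access control

record = MetadataRecord(
    title="daily_orders",
    description="One row per order placed per day",
    row_count=1_204_331,
    attribute_count=14,
    tags=["financial", "relational"],
    created_at=datetime(2019, 1, 4),
    created_by="etl_service",
    last_modified_at=datetime(2019, 6, 2),
    last_modified_by="jsmith",
    allowed_updaters=["etl_service", "data_stewards"],
)
```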

The service mesh architecture uses an Envoy sidecar to handle data storage for all entries, whether from a static server or from an online source where data is frequently updated.

The following image shows the extracted information of a typical service mesh, describing everything that both new and experienced users would need to work with the data.

(Image source: https://kublr.com)

A metadata management strategy is central to ensuring that data is well interpreted and can be leveraged to bring results. Such a strategy covers collection, storage, processing, and cleaning. Accordingly, metadata management jobs have grown through the years.

Understanding Metadata Management for Big Data

The metadata management process is one of the hottest topics in our industry, as Global 2000 organizations and large government agencies begin to understand that without an accurate, timely, and well-understood metadata system, they cannot realize the benefits of advanced analytics, big data, data warehousing, and the tremendous pool of data opportunities from the Internet of Things (IoT).

  • The practice of metadata management is central to every part of data management. Imagine attempting to build sustainable data management without metadata management; it simply cannot be done.
  • Data stewards invest a large portion of their energy working with metadata and only a small measure of time on the data itself.
  • Without appropriate metadata management, these stewards would be constrained to working with just SharePoint, Excel spreadsheets, Word documents, and a collection of non-automated procedures to accomplish their essential tasks.

Good data management in big data needs good metadata management. A well-developed metadata management system needs automated and precise metadata management frameworks, metadata development, metadata stores, and intelligent records across the information technology (IT) environment.

The Data Management Association (DAMA) aptly states that every part of enterprise metadata management has deep connections to countless companies and thriving industries.

Decoding Metadata Management

If you're in the field of metadata management, then you'll be familiar with metadata being called the "data of data." There are many best practices and terms that should be understood to work in this profession successfully. The fundamental best practices of metadata management are in some ways tied to its definition.

  • The classic definition of metadata is "data about data." Unfortunately, this definition is limiting, as metadata is about substantially more.
  • Metadata is a kind of data that carefully describes the who, what, when, where, why, and how of an organization's data, processes, applications, assets, business concepts, and other items of interest.

More simply, metadata provides the context for the content of all high-quality data assets.

From this definition, we can see that metadata is a kind of data. Like data, metadata is a set of digitized facts that adds knowledge about the things it describes. This knowledge aims to answer the who, what, when, where, why, and how: the 5 Ws and 1 H.

The 4 Characteristics of Any Metadata Management Model

Great metadata management has four essential qualities: it is generic, integrated, current, and historical.

  • Non-Specificity

o   Generic (non-specific) means that the physical metamodel stores metadata by metadata subject area rather than being application-specific.

o   The issue with application-specific metamodels is that metadata subject areas expand their scope and can even change over time. For example, today Oracle might be the database standard.

o   Tomorrow the standard might change to SQL Server for cost or compatibility advantages. That situation would force unnecessary changes to the physical metamodel. Further, we should not put application-specific names like ACCT_REC (i.e., Accounts Receivable) into the metamodel.

o   Like any other system, the metamodel has inputs (metadata coming in), processes, and outputs (metadata going out).

o   Accordingly, there is no reason for our metamodel to carry application-specific names for its attributes or tables, as this is limiting and poor metamodeling practice. (A small illustration follows this subsection.)
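
A small, hypothetical illustration of the difference (all names invented):

```python
# Poor practice: the source application is baked into the metamodel's names.
app_specific_tables = ["ORACLE_ACCT_REC_TABLES", "ORACLE_ACCT_REC_COLUMNS"]

# Better: generic, subject-area names that survive a move from Oracle to
# SQL Server (or anything else) without a schema change.
generic_tables = ["SUBJECT_AREA", "DATABASE_TABLE", "DATABASE_COLUMN"]

# The application lives in the data, not in the structure, so swapping
# database standards is a row update rather than a remodeling effort.
column_entry = {
    "subject_area": "accounts_receivable",
    "table_name": "invoices",
    "column_name": "invoice_total",
    "source_system": "Oracle",
    "data_type": "DECIMAL(12,2)",
}
```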

  • Integrated Perspective

o   A metamodel gives an integrated perspective of the enterprise's major metadata subject areas. Suppose that you require a metamodel that holds business definitions for the metadata elements and also captures technical metadata lineage.

o   Metamodelers often make the mistake of putting the business metadata (descriptions) in one set of tables and the technical metadata in a separate set of tables with no relationships between them.

o   Consequently, if the business is considering adding a new customer attribute, the metadata team cannot query the lineage-related metadata in the model to see which metadata elements would be affected by that business decision. This severely restricts the power that metadata management can provide.

o   The best practice of having an integrated metamodel is missed by the vast majority of organizations, because they have implemented numerous smaller metadata management solutions rather than an enterprise-wide metadata management effort. (A toy sketch of an integrated model follows this subsection.)
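
A toy sketch of what integration buys you: with business terms, technical columns, and lineage in one model, the impact question becomes a simple query. All names below are invented.

```python
# Business terms linked to technical columns in a single, integrated model.
business_terms = {
    "Customer": {"definition": "A party that has purchased at least once"},
}
columns = {
    "crm.customers.cust_id": {"business_term": "Customer"},
    "dw.dim_customer.customer_key": {"business_term": "Customer"},
}
lineage = {  # downstream column -> upstream columns it is derived from
    "dw.dim_customer.customer_key": ["crm.customers.cust_id"],
}

def impacted_by(term: str) -> list:
    """Columns affected if the business definition of `term` changes."""
    return [col for col, meta in columns.items()
            if meta["business_term"] == term]

print(impacted_by("Customer"))
# ['crm.customers.cust_id', 'dw.dim_customer.customer_key']
```

With the business descriptions and the technical lineage kept in unrelated tables, this one lookup would be impossible.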

  • Current and Planned

o   A fundamentally sound metamodel contains metadata that relates to both the current state and the future/planned state.

o   Metadata management is hugely significant for understanding and managing our current business and technical landscape; however, it can also play a central role in our organization's future plans.

  • Historical and Timed

o   Ultimately, metadata models are historical: a good metamodel will include historical views of the metadata, even as it changes over time. This enables a corporation to see how its business has evolved over the years.

o   This is especially critical if the MME is supporting an application that contains historical metadata, such as a data warehouse or an advanced analytics application.

o   A fundamentally sound metamodel stores both the old and the new definitions, since each has validity depending on what metadata you are analyzing (and the age of that metadata). A sketch of such versioned definitions follows.
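
A sketch of how a metamodel might keep both definitions with their effective windows (terms and dates invented):

```python
from datetime import date

# Both the old and the current definition are kept, each with its effective
# window, so analysis of older metadata uses the meaning in force at the time.
definitions = [
    {"term": "Active Customer",
     "definition": "Purchased in the last 24 months",
     "effective_from": date(2012, 1, 1), "effective_to": date(2017, 6, 30)},
    {"term": "Active Customer",
     "definition": "Purchased in the last 12 months",
     "effective_from": date(2017, 7, 1), "effective_to": None},
]

def definition_as_of(term: str, when: date) -> str:
    """Return the definition of `term` that was in force on `when`."""
    for d in definitions:
        if (d["term"] == term and d["effective_from"] <= when
                and (d["effective_to"] is None or when <= d["effective_to"])):
            return d["definition"]
    raise KeyError(f"No definition of {term!r} in effect on {when}")

print(definition_as_of("Active Customer", date(2015, 3, 1)))
# Purchased in the last 24 months
```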

Features Of Good Metadata Tools

There should be robust tools to help users access metadata and enforce all the rules defined by executives. Some of these features include:

  • Test Data

o   Understanding a larger dataset and running a preliminary analysis on it is best done with some test data that summarizes the overall structure and content of the data.

  • Information Stats (Profiles)

o   Profiles give answers to some basic questions, such as row count, distinct values, most frequently used values, null count, and maximum and minimum values (a profiling sketch follows this feature list).

  • Lineage

o   Lineage helps you understand the origin of data: how it traveled, and what transformations occurred before it reached you. It also lets you see where else this data is being used.

  • Past Communication

o   Communication is the key to effective metadata governance, so it's essential to tie all discussion related to a piece of metadata together in one place. Likewise, all comments and annotations regarding that metadata should be accessible there as well.

  • Relationships with Other Metadata

o   For an MDM tool it is crucial to discover relationships among data so that data search becomes possible. There are various ways to achieve this: manually through human curation, automatically through metadata semantic matching, or automatically through data matching.
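
As a rough illustration of the profiling feature described above, here is a small pandas sketch that computes those per-column stats; the function and sample data are invented for illustration.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Basic stats per column: count, distinct, top value, nulls, min/max."""
    rows = []
    for col in df.columns:
        s = df[col]
        top = s.mode()
        rows.append({
            "column": col,
            "count": int(s.count()),
            "distinct": int(s.nunique()),
            "top_value": top.iloc[0] if not top.empty else None,
            "null_count": int(s.isna().sum()),
            "min": s.min() if pd.api.types.is_numeric_dtype(s) else None,
            "max": s.max() if pd.api.types.is_numeric_dtype(s) else None,
        })
    return pd.DataFrame(rows)

df = pd.DataFrame({"city": ["NY", "NY", None], "amount": [10.0, 3.5, 7.2]})
print(profile(df))
```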

Some Metadata Management Tools

A majority of metadata management practitioners and companies use big data tools mainly for data warehousing. The role of metadata management in data warehousing is crucial to maintaining the integrity of the data.

  • Informatica

o   Its metadata management solutions are the Metadata Manager, Business Glossary, Axon and Enterprise Information Catalog.

o   But the challenge in front of the company is to quickly demonstrate that the acquisition of Diaku's Axon can be brought into its set of metadata management solutions as one seamlessly integrated product.

  • OvalEdge

o   OvalEdge is a comprehensive metadata management tool along with ETL. According to its customers, it provides a state-of-the-art UI that makes collaboration efficient.

o   It has a patent-pending relationship algorithm that finds the relationships among data. To facilitate compliance, it provides for predefining rules and procedures at its very core.

  • Alation

o   Its metadata management solution is the Alation Data Catalog. Despite being small, the company has ample brand recognition in the market and has gained some traction with its data catalog. But its core metadata management functionalities, such as data lineage and impact analysis, are very limited.

  • Amazon Web Services

o   Metadata management in AWS has been hailed as a streamlined process that significantly reduces the time needed to integrate large datasets.

o   Delivery companies and data warehousing corporations, too, have been implementing metadata management in AWS (a short boto3 sketch follows this tool list).

  • Collibra

o   Collibra offers Collibra Connect among its metadata management tools, with data governance and support for regulatory requirements as its main use cases.

o   But customers have given Collibra a wide range of mixed reviews for impact analysis, lineage, and semantic frameworks.

  • SAP HANA/VORA

o   A great tool for managing large datasets, with stable architectures composed in cloud settings; it uses Java, Scala, Python, and a host of other software in delivering comprehensive metadata management tooling.

(Image source: https://wiki.scn.sap.com)

o   SAP also creates extensible products that can track the flow, spread, and entire workflow of data from source to sink.

  • Spreadsheets

o   Spreadsheets are a standard tool for storing data; combined with macros and Visual Basic, they have been and remain useful for experimenting with the metadata that companies generate.
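
For the AWS entry above, here is a minimal sketch of reading table metadata out of the AWS Glue Data Catalog with boto3. It assumes AWS credentials are already configured and that a Glue database exists; the region and the database name "analytics" are placeholders.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Page through every table registered in the catalog database and print
# each table's name along with its column names.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="analytics"):
    for table in page["TableList"]:
        cols = table.get("StorageDescriptor", {}).get("Columns", [])
        print(table["Name"], [c["Name"] for c in cols])
```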

What Are the Types of Metadata?

  • Metadata Repository

o   This is the industry's first far-reaching term for the metadata management system. The term refers to the metamodel and, usually, a management software package that may have been purchased. It is one of the most important components of the MME.

  • Technical Metadata

o   Technical metadata gives developers, DBAs (database administrators), technical users, and other IT staff members the information they need to maintain, grow, and effectively manage an organization's IT environment.

o   Technical metadata is absolutely essential for the ongoing upkeep and growth of the warehouse. Without technical metadata, the task of analyzing and implementing changes to a decision support system is significantly more difficult and time-consuming.

o   Examples include the column structure of a database table, the header rows of a CSV file, and files created as JSON, XML, or Avro (a harvesting sketch follows this section).

  • Business Metadata

o   Business metadata includes things like security levels, privacy levels, and business acronyms.

o   Both IT and the business need quality metadata to understand the information on hand. Without useful business metadata available, the organization is ripe for making risky decisions from faulty data.
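
As a small illustration of harvesting technical metadata of that kind, the Python sketch below pulls the column structure from a CSV header row and infers field types from a JSON record; the sample payloads are invented.

```python
import csv
import io
import json

# Invented sample payloads standing in for files landing in storage.
csv_text = "order_id,amount,region\n1,19.99,north\n"
json_text = '{"order_id": 1, "amount": 19.99, "region": "north"}'

# Column structure from the CSV header row.
header = next(csv.reader(io.StringIO(csv_text)))
print("CSV columns:", header)

# Field names and inferred types from a JSON record.
record = json.loads(json_text)
print("JSON fields:", {k: type(v).__name__ for k, v in record.items()})
```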

How To Implement Best Practices?

  • Start From The Top

o   Metadata was most likely a siloed corporate instrument in the past. Today, however, organizations break apart and distribute their stores of data, and the information is shared across several divisions and lines of business.

o   It is ultimately critical to create an institutional metadata governance process and taxonomy for your whole business, with an eye toward eliminating small differences in usage between departments.

o   If that sounds bureaucratic, well, perhaps it is – but it's the sort of roll-up-your-sleeves effort that is ultimately worth the pain.

o   This top-down methodology means parsing information according to how it's used by the whole organization, across divisions, and in combination with unstructured outside information. Intra-department variants ought to be addressed, and custom metadata management use cases eliminated or replaced.

  • Get Everyone Together

o   Another recommended metadata management best practice is to bring all team members together and consolidate metadata stores so they can be accessed by all the real stakeholders in your enormous list of data contacts. The trend nowadays is toward cloud-based metadata stores, which the major cloud vendors can provide.

o   Better yet, use management and sharing tools to ensure that no one is left out and everyone has something to add to and take from the mix.

  • Let Everyone Take Control

o   To accomplish a level of understanding between the different divisions, it's insufficient to issue a decree from the top. It's essential to gather the people who really use the terms in the same room to hash things out.

o   They have to clarify how and why they use a specific data description. Unofficial uses of metadata go back to the days when every corporate and government office was loaded up with maverick Microsoft Access databases, built to evade an overworked IT department.

o   Before the appearance of big data, the people in the trenches developed smart metadata management use cases. Make sure to welcome those fearless warriors to the gathering.

  • Plan for Changes and Updates

o   A stable institutional metadata store will be used vigorously and will motivate new uses and enhancements for existing procedures. In anticipation of that, plan a process for the simple submission of new ideas, careful assessment of their validity, and fast deployment when warranted.

  • Keep Your Partners in Mind

o   Keep in mind that you're increasingly sharing your information, and thereby opening your metadata management frameworks, to partner organizations, which are doubtless doing all of the things you're doing with a metadata governance procedure to manage their gathered data.

o   Consider any overlap with your partners and how they characterize the information that the two parties consider essential. Those discussions are at least as necessary as the ones you have in-house.

o   Well-managed metadata and big data are inseparable: doing an excellent job with one requires doing an excellent job with both. Clean and well-characterized metadata makes a significant difference in delivering excellent business intelligence results.

  • Automate Metadata Retrieval

o   Ideally, you want to automate the capture of metadata from big data streams upon data ingestion and create repeatable and stable ingestion processes.

o   A data lake management platform can automatically generate metadata on intake by importing Avro, JSON, or XML documents, or when data from relational databases is ingested into the data lake (a small capture sketch follows this list).

o   Automation is fundamental to building a scalable architecture, one that will grow with your business over time.
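
A minimal sketch of such automated capture at ingestion time; the function, file layout, and record shape are all illustrative rather than any particular platform's API.

```python
import csv
import json
from datetime import datetime, timezone
from pathlib import Path

def capture_metadata(path: Path) -> dict:
    """Auto-generate a metadata record for a file landing in the lake."""
    meta = {
        "source_file": path.name,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": path.stat().st_size,
    }
    if path.suffix == ".csv":
        with path.open(newline="") as f:
            reader = csv.reader(f)
            meta["columns"] = next(reader)            # header row
            meta["row_count"] = sum(1 for _ in reader)
    elif path.suffix == ".json":
        with path.open() as f:
            record = json.load(f)
        meta["fields"] = {k: type(v).__name__ for k, v in record.items()}
    return meta

# Demo with an invented landing file.
landing = Path("daily_orders.csv")
landing.write_text("order_id,amount\n1,19.99\n2,5.00\n")
print(capture_metadata(landing))
```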

Concluding Terms: The Future of Metadata Management

Metadata has seen a tremendous shift in its position, becoming one of the most critical components of the application requirements of modern information systems. Most modern systems are web-based, serving either the organization itself (intranet) or the public.

In the latter case especially, metadata is the gateway to improving communication between heterogeneous information systems and creating entry points between user client workstations and information servers.

  • Metadata management will thus see a constant rise as the staple data source for electronic business between information systems.
  • Businesses will learn to separate the primary information resources from the data and processes (the metadata system) providing access to those resources.
  • The technology, however, faces predicted limitations, ranging from the need for a processor technology to replace CMOS to the need for more efficient storage devices.
  • Better-refined queries against better-constructed databases will dominate the need for parallelism of algorithms acting on data resources. As a result, quality metadata will be the basis for those solutions.
  • Metadata will thus become a logical “map” by which unanticipated or unknown future users can navigate through the information and data. It will also become the basis for auditors to review your system and even perform post-breach damage assessments.

Metadata management thus lights the way to safer management practices in a future where companies may be marred by leaky data or incorrect records.

It will thus be a beacon enabling e-discovery and a path to appropriate data security and information privacy.




============================================


Data Science vs Data Analytics vs Big Data


When we talk about data processing, Data Science, Big Data, and Data Analytics are the terms that come to mind, and there has always been confusion between them. In this article on Data Science vs Big Data vs Data Analytics, I will walk through the similarities and differences between them.


We live in a data-driven world. In fact, the amount of digital data in existence is growing at a rapid rate, doubling every two years and changing the way we live. Now that Hadoop and other frameworks have resolved the problem of storage, the main focus has shifted to processing this huge amount of data.

In this article on Data Science vs Data Analytics vs Big Data, I will be covering the following topics in order to make you understand the similarities and differences between them:

  • Introduction to Data Science, Big Data & Data Analytics
  • What does a Data Scientist, Big Data Professional & Data Analyst do?
  • Skill-set required to become a Data Scientist, Big Data Professional & Data Analyst
  • What is the salary prospect?
  • Real-time use case

Introduction to Data Science, Big Data, & Data Analytics

Let’s begin by understanding the terms Data Science vs Big Data vs Data Analytics.

What Is Data Science?

Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data.


It also involves solving a problem in various ways to arrive at the solution; on the other hand, it involves designing and constructing new processes for data modeling and production using various prototypes, algorithms, predictive models, and custom analyses.

What is Big Data?

Big Data refers to the large amounts of data pouring in from various data sources in different formats. It can be analyzed for insights that lead to better decisions and strategic business moves.


What is Data Analytics?

Data Analytics is the science of examining raw data with the purpose of drawing conclusions about that information. It is all about discovering useful information from the data to support decision-making. This process involves inspecting, cleansing, transforming & modeling data.

[Source: ibm.com]
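
As a toy illustration of that inspect-cleanse-transform-model loop, here is a short pandas sketch; the data and column names are invented.

```python
import pandas as pd

raw = pd.DataFrame({
    "region": ["north", "North ", None, "south"],
    "sales":  [120.0, 80.0, 55.0, 200.0],
})

clean = (raw
         .dropna(subset=["region"])                        # cleanse: drop nulls
         .assign(region=lambda d: d["region"].str.strip()  # normalize labels
                                             .str.title())
         .groupby("region", as_index=False)["sales"].sum())  # model: aggregate
print(clean)
#   region  sales
# 0  North  200.0
# 1  South  200.0
```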

What Does Data Scientist, Big Data Professional & Data Analyst Do?

What does a Data Scientist do?

Data Scientists perform an exploratory analysis to discover insights from the data. They also use various advanced machine learning algorithms to identify the occurrence of a particular event in the future. This involves identifying hidden patterns, unknown correlations, market trends and other useful business information.

Roles of Data Scientist

What do Big Data Professionals do?

The responsibilities of a big data professional revolve around dealing with huge amounts of heterogeneous data, gathered from various sources and coming in at a high velocity.

Roles of Big Data Professional

Big data professionals describe the structure and behavior of a big data solution and how it can be delivered using big data technologies such as Hadoop, Spark, and Kafka, based on requirements.

What does a Data Analyst do?

Data analysts translate numbers into plain English. Every business collects data, like sales figures, market research, logistics, or transportation costs. A data analyst’s job is to take that data and use it to help companies make better business decisions.

Roles of Data Analyst

Skill-Set Required To Become Data Scientist, Big Data Professional, & Data Analyst

What Is The Salary Prospect?

The below figure shows the average salary structure of **Data Scientist**, **Big Data Specialist**, and **Data Analyst**.

A Scenario Illustrating the Use of Data Science vs Big Data vs Data Analytics

Now, let’s try to understand how we can garner benefits by combining all three of them.

Let’s take an example of Netflix and see how they join forces in achieving the goal.

First, let’s understand the role of *Big Data Professional* in the Netflix example.

Netflix generates a huge amount of unstructured data in the form of text, audio, video files, and more. If we try to process this dark (unstructured) data using the traditional approach, it becomes a complicated task.

Approach in Netflix

Traditional Data Processing

Hence a Big Data Professional designs and creates an environment using Big Data tools to ease the processing of Netflix Data.

Big Data approach to process Netflix data

Now, let’s see how Data Scientist Optimizes the Netflix Streaming experience.

Role of Data Scientist in Optimizing the Netflix streaming experience

1. Understanding the impact of QoE on user behavior

User behavior refers to the way a user interacts with the Netflix service, and data scientists use the data to both understand and predict behavior. For example, how would a change to the Netflix product affect the number of hours that members watch? To improve the streaming experience, data scientists look at QoE metrics that are likely to have an impact on user behavior. One metric of interest is the rebuffer rate, which is a measure of how often playback is temporarily interrupted. Another metric is bitrate, which refers to the quality of the picture that is served/seen; a very low bitrate corresponds to a fuzzy picture.
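
A toy sketch of computing those two metrics from playback event logs; this is not Netflix's actual pipeline, and the event shape is invented.

```python
# Invented playback events: one stream of session-level records.
events = [
    {"session": "s1", "type": "play"},
    {"session": "s1", "type": "rebuffer"},
    {"session": "s1", "type": "bitrate", "kbps": 3500},
    {"session": "s2", "type": "play"},
    {"session": "s2", "type": "bitrate", "kbps": 800},
]

sessions = {e["session"] for e in events}
rebuffered = {e["session"] for e in events if e["type"] == "rebuffer"}
bitrates = [e["kbps"] for e in events if e["type"] == "bitrate"]

print("rebuffer rate:", len(rebuffered) / len(sessions))     # 0.5
print("avg bitrate (kbps):", sum(bitrates) / len(bitrates))  # 2150.0
```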

2. Improving the streaming experience

How do Data Scientists use data to provide the best user experience once a member hits “play” on Netflix?

One approach is to look at the algorithms that run in real-time or near real-time once playback has started, which determine what bitrate should be served, what server to download that content from, etc.

For example, a member with a high-bandwidth connection on a home network could have very different expectations and experience compared to a member with low bandwidth on a mobile device on a cellular network.

By determining all these factors one can improve the streaming experience.

3. Optimize content caching

A set of big data problems also exists on the content delivery side.

The key idea here is to locate the content closer (in terms of network hops) to Netflix members to provide a great experience. By viewing the behavior of the members being served and the experience, one can optimize the decisions around content caching.

4. Improving content quality

Another approach to improving user experience involves looking at the quality of content, i.e. the video, audio, subtitles, closed captions, etc. that are part of the movie or show. Netflix receives content from the studios in the form of digital assets that are then encoded and quality checked before they go live on the content servers.

In addition to the internal quality checks, data scientists also receive feedback from members when they discover issues while viewing.

By combining member feedback with intrinsic factors related to viewing behavior, they build models to predict whether a particular piece of content has a quality issue. Machine learning models, along with natural language processing (NLP) and text mining techniques, can be used both to improve the quality of content that goes live and to use the information provided by Netflix users to close the loop on quality and replace content that does not meet users’ expectations.

So this is how Data Scientist optimizes the Netflix streaming experience.

Now let’s understand how Data Analytics is used to drive Netflix’s success.

Role of Data Analyst in Netflix

The above figure shows the different types of users who watch the video/play on Netflix. Each of them has their own choices and preferences.

So what does a Data Analyst do?

A data analyst creates a user stream based on the preferences of users. For example, if user 1 and user 2 have the same preference or choice of video, then the data analyst creates a user stream for those choices (a toy sketch of this grouping follows the list below). The analyst also:
  • Orders the Netflix collection for each member profile in a personalized way. (The same genre row for each member has an entirely different selection of videos.)
  • Picks out the top personalized recommendations from the entire catalog, focusing on the titles that rank highest.
  • Pops out the trending videos by capturing all events and user activities on Netflix.
  • Sorts the recently watched titles and estimates whether the member will continue to watch, rewatch, or stop watching.
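
A toy sketch of that grouping, purely illustrative:

```python
from collections import defaultdict

# Invented member preferences: members with the same preference share a stream.
preferences = {
    "user1": "thriller",
    "user2": "thriller",
    "user3": "documentary",
}

streams = defaultdict(list)
for user, genre in preferences.items():
    streams[genre].append(user)

print(dict(streams))
# {'thriller': ['user1', 'user2'], 'documentary': ['user3']}
```
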
I hope you have understood the *differences* and *similarities* between Data Science vs Big Data vs Data Analytics.

5 Prominent Big Data Analytics Tools to Learn in 2020



We all know that Big Data refers to voluminous data gathered from different sources such as mobile phones, social media feeds, IoT devices, databases, servers, and applications. But this data is of no use until it is properly manipulated so that decisions can be made from it.

So, to make this data meaningful, certain scientific tools and methodologies are implemented to extract valuable information from it. The overall process of analyzing data sets with the support of specialized tools and technologies is referred to as Big Data analytics.

Big Data analytics is used to process large data sets to uncover hidden patterns, market trends, customer preferences, and other useful information that can help organizations make decisions to enhance their business.

With Big Data analytics, it is possible to process data very quickly and efficiently, which was not possible with more traditional business intelligence solutions.

Now in this article, we will focus our discussion on a few important Big Data analytics tools which are trending in the IT industry. But before that, we want to introduce you to a set of online courses related to Big Data concepts.

Here is the list of top Big data analytics tools:

1. Apache Hadoop:

Apache Hadoop is a free, Java-based software framework and big data analytics tool. It facilitates effective storage of huge amounts of data in a storage unit known as a cluster. The special feature of this framework is that it runs in parallel on a cluster and can process huge data across all nodes in it (a minimal word-count sketch follows the feature list).
Features:
• It brings flexibility in data processing
• It allows for faster data processing.
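
To make the parallel processing model concrete, here is the classic word count as a minimal sketch. It uses the third-party mrjob Python library (an assumption of this example; plain Hadoop Streaming or Java MapReduce would work just as well): mappers run on each node in parallel, and reducers sum the counts.

```python
from mrjob.job import MRJob

class WordCount(MRJob):
    def mapper(self, _, line):
        # Runs in parallel across input splits on the cluster's nodes.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Receives all counts for one word and sums them.
        yield word, sum(counts)

if __name__ == "__main__":
    WordCount.run()
```

Run it locally with `python wordcount.py input.txt`, or submit the same script to a Hadoop cluster with `python wordcount.py -r hadoop input.txt`.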

2. HPCC:

HPCC is a big data analytics tool developed by LexisNexis Risk Solutions; the name stands for High-Performance Computing Cluster. The platform is advanced and enterprise-ready. It uses a high-level programming language called Enterprise Control Language (ECL), which is based on C++.

Features:
• It is highly efficient in that it can accomplish Big Data tasks with less code
• It has the ability to automatically optimize code for parallel processing.

3. KNIME:

KNIME stands for Konstanz Information Miner. It is an open-source tool used for enterprise reporting, integration, research, CRM, data mining, and more. It supports multiple platforms, such as the Linux and Windows operating systems.
It is considered a good alternative to SAS.

Features:
• It has a rich set of algorithms
• It automates a lot of manual work.

4. Datawrapper:

Datawrapper is an open-source platform for data visualization. Its major customers are newsrooms spread all over the world, including notable names such as The Times and Fortune.

Features:
• It is device-friendly: it works very well on all types of devices, whether mobile, tablet, or desktop.
• It has great customization and export options.

5. Lumify:

It is an open-source big data analytics tool. Its primary features include full-text search, 2D and 3D graph visualization, link analysis between graph entities, integration with mapping systems, and real-time collaboration through a set of projects or workspaces.

Features:
• It is scalable
• It supports cloud-based environments and works well with Amazon’s AWS.
Here we have provided 5 prominent tools used in the Big Data analytics field. However, you can find a list of many more such tools here.

Wrap up:

Big Data analytics tools play a very important role in the Data Science and Big Data fields. A number of Big Data analytics tools are used by different companies, and presently in the IT industry there is huge scope for IT professionals with good knowledge of any of these tools.

Considering this growth, if you are looking to learn Big Data Analytics tools, then visit these online courses that can be of great help to you.

We hope the above discussion helped our readers learn about some of the Big Data analytics tools. We’d love you to share your thoughts in the comments section below.

Big Data Tutorial - Big Data Cluster Administration


In SQL Server 2019 Big Data Clusters, the management services embedded in the platform provide fast scale and upgrade operations, automatic log and metrics collection, enterprise-grade secure access, and high availability. This video provides an overview of these administration experiences for Big Data Clusters.
