Introduction, Key Features and Use Cases - Big Data Platform

What is a Big Data Platform?

A Big Data Platform refers to IT solutions that combine several Big Data tools and utilities into one packaged answer, which is then used for managing and analyzing Big Data. Why this is needed is covered later in the blog, but consider how much data is created every day: if that data is not maintained well, enterprises are bound to lose customers.

What is the need for a Big Data Platform?

A Big Data Platform combines the capabilities and features of many Big Data applications into a single solution. It generally consists of servers, storage, databases, management utilities, and business intelligence tools.

It also focuses on providing its users with efficient analytics tools for massive datasets. These platforms are often used by data engineers to aggregate, clean, and prepare data for business analysis. Data scientists use them to discover relationships and patterns in large data sets using machine learning algorithms. Users can also build custom applications for their own use case, such as calculating customer loyalty (an e-commerce use case); there are countless others.

What are the best Platforms?

These platforms aim at four letters: S, A, P, S, which stand for Scalability, Availability, Performance, and Security. Various tools are responsible for managing the hybrid data of IT systems. The platforms are listed below:

  1. Hadoop Delta Lake Migration Platform
  2. Data Catalog Platform
  3. Data Ingestion Platform
  4. IoT Analytics Platform
  5. Data Integration and Management Platform
  6. ETL Data Transformation Platform

Hadoop - Delta Lake Migration Platform

It is an open-source software platform managed by Apache Software Foundation. It is used to manage and store large data sets at a low cost and with great efficiency. 

IoT Analytics Platform

It provides a wide range of tools to work with IoT data; this functionality comes in handy for IoT use cases.

Data Ingestion Platform

This layer is the first step for the data coming from variable sources to start its journey. This means the data here is prioritized and categorized, making data flow smoothly in further layers in this process flow.
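As a rough sketch of this idea (the categories, priorities, and function names below are illustrative, not a real ingestion framework), incoming records can be categorized by source type and ordered by priority before flowing on to the next layer:

```python
# Hypothetical ingestion-layer sketch: categorize records by source type
# and release them in priority order. Priorities are illustrative.
import heapq

PRIORITY = {"transaction": 0, "clickstream": 1, "log": 2}  # lower = more urgent

def ingest(records):
    """Categorize records and return them highest-priority first."""
    queue = []
    for seq, record in enumerate(records):
        category = record.get("type", "log")
        heapq.heappush(queue, (PRIORITY.get(category, 99), seq, record))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

incoming = [
    {"type": "log", "msg": "disk usage at 70%"},
    {"type": "transaction", "id": 42},
    {"type": "clickstream", "page": "/home"},
]
ordered = ingest(incoming)
print([r["type"] for r in ordered])  # ['transaction', 'clickstream', 'log']
```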

Data Catalog Platform

It provides a single self-service environment to users, helping them find, understand, and trust data sources. It also helps users discover new data sources if there are any. Discovering and understanding data sources are the initial steps for registering them. Users search the Data Catalog tools based on their needs and filter for appropriate results. In enterprises, a Data Lake serves Business Intelligence teams, Data Scientists, and ETL developers, who all need the right data; they use catalog discovery to find the data that fits their needs.

ETL Data Transformation Platform

This platform can be used to build data transformation pipelines and schedule their runs.
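As an illustrative sketch (the function and field names are hypothetical, not any specific platform's API), a minimal extract-transform-load pipeline could look like this:

```python
# Minimal ETL sketch: extract raw rows, transform (clean types, derive
# fields), and load into a destination. All names here are illustrative.

def extract():
    # In practice this would read from files, APIs, or source databases.
    return [{"customer": "a", "spend": "120"}, {"customer": "b", "spend": "80"}]

def transform(rows):
    # Clean types and derive a field, e.g. a customer-loyalty flag.
    return [{**r, "spend": int(r["spend"]), "loyal": int(r["spend"]) >= 100}
            for r in rows]

def load(rows, destination):
    # In practice this would write to a warehouse table.
    destination.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)
```

A scheduler would simply invoke this pipeline on a timer; the three stages stay independently testable.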

What are the essential components of a Big Data Platform?

The essential components are given as follows:

  • Data Ingestion, Management, ETL, and Warehouse – Provides resources for effective data management and warehousing, treating data as a valuable resource.
  • Stream Computing – Helps compute streaming data that is used for real-time analytics.
  • Analytics/Machine Learning – Features for advanced analytics and machine learning.
  • Integration – Lets users integrate data from any source with ease.
  • Data Governance – Provides comprehensive security and data governance solutions to protect the data.
  • Provides Accurate Data – Delivers analytics tools that help filter out inaccurate data, helping the business make the right decisions with accurate information.
  • Scalability – Helps scale the application to analyze ever-growing data, offering scalable storage capacity for efficient analysis.
  • Price Optimization – Data analytics on a Big Data Platform provides insights that help B2C and B2B enterprises optimize the prices they charge.
  • Reduced Latency – With a warehouse, analytics tools, and efficient data transformation, it helps reduce data latency and provide high throughput.

What are the Big Data Analytics Use Cases?

Recommendation engines

  • Insurance Fraud Detection – Companies handling a large number of financial transactions use tools provided by the platform to look for fraud.
  • In Real Life – It can be used for various real-time stream processing use cases in media and entertainment, weather patterns, the transportation industry, the banking sector, and so on.


In this section, we provided details of the platforms used in Big Data environments. Based on your requirements, you can choose from these technologies for managing, operating, developing, and deploying your organization's Big Data securely.

Rupert Beatty
Top 6 AWS Cloud Use Cases which are Revolutionizing Business

Cloud Computing, a fundamentally alien concept a few years back, has changed the way companies operate today. The Enterprise Cloud Market is said to reach $235.1 Billion in 2017, with competition heating up among players like Rackspace, Microsoft Azure & Google Cloud!

The key proposition of adopting the cloud is to save cost & improve efficiency within an organization. Also, companies are looking for the ideal cloud platform that helps them build a scalable global infrastructure at the lowest cost, deploy new applications instantly, scale up workload based on demand, and remain secure!

One of the early birds in Cloud Computing, Amazon Web Services (AWS) specializes in offering IT infrastructure services to businesses in the form of web services or cloud computing. Learn more with the AWS Course.

AWS has served a range of customers across different industries with different cloud needs. Be it Nokia or Pfizer or Comcast, AWS cloud has helped address organizational needs by providing customized solutions for every requirement.

Here are 6 AWS Cloud Use Cases Revolutionizing Business:

Pfizer: A global leader in the pharmaceutical industry, Pfizer wanted to address its issue of handling peak computing needs. Amazon VPC (Virtual Private Cloud) was set up to enhance Pfizer’s high-performance computing software and systems for worldwide research and development. The Virtual Private Cloud also helped Pfizer respond to new challenges by increasing computational capacity beyond that of its existing HPC (high-performance computing) systems. As a result, Pfizer avoided additional hardware/software investments and focused its investment on other WRD activities.

Shazam: AWS has helped Shazam connect to over 200 million people in more than 200 countries spanning 33 languages. With the task of creating brand awareness, Shazam was gearing up for Super Bowl advertising. Its need to address the expected spike in demand after the advertising campaign was met using Amazon EC2 (Elastic Compute Cloud), with traffic distributed across instances by Elastic Load Balancing. Shazam also availed itself of Amazon DynamoDB for secondary data activity and Amazon Elastic MapReduce for Big Data analytics. With over 1 million transactions after the Super Bowl event, Shazam was able to process them all with Amazon EC2 and improve its efficiency.

Nokia: Nokia Corporation, a yesteryear leader in mobile manufacturing, had its key markets in India, Asia Pacific, Africa and South America. Nokia’s Xpress Internet Services platform focussed on providing Internet services on the go for these markets. With a platform running on 2200 servers and collecting 800 GB of data per day, Nokia was looking to scale the database and generate reports more efficiently than it could with a traditional database. After moving to AWS and using Amazon Redshift (Data Warehouse), Nokia was able to run queries twice as fast as its previous solutions and use business intelligence tools to mine and analyze big data at a 50% reduced cost!

NASA: One of its most ambitious projects, the Curiosity Mission, which aimed to put an exploration rover on Mars, included an 8-month voyage. The mission included a sophisticated and meticulous landing procedure with a ‘sky crane’ manoeuvre that gently lowered Curiosity to the surface of Mars. NASA wanted to make sure this historic moment was shared with its fans across the globe by providing real-time details of the mission to the public. NASA’s Jet Propulsion Laboratory used AWS to stream the images and videos of Curiosity’s landing. The use of Amazon Route 53 and Elastic Load Balancers (ELB) enabled NASA to balance the load across AWS regions and ensure the availability of its content under all circumstances. The model helped NASA deliver hundreds of gigabits per second of traffic to hundreds of thousands of viewers in real time and connect with the rest of the world.


Netflix: A renowned player in the US when it comes to online content streaming, Netflix partnered with AWS for services and content delivery. AWS enables Netflix to spin up thousands of servers and terabytes of storage within minutes. Users can stream Netflix content from anywhere in the world on the web, tablets, and mobile devices.

Airbnb: A company that connects property owners and travelers renting unique vacation spaces around the world, Airbnb faced service administration challenges with its original provider. It soon shifted to the AWS cloud, where it used over 200 Amazon Elastic Compute Cloud (EC2) instances for its application, memcache, and search servers. To process and analyze 50 gigabytes of data daily, Airbnb used Amazon Elastic MapReduce (Amazon EMR). Airbnb also used other services such as Amazon CloudWatch, the AWS Management Console, and Amazon Simple Storage Service. Airbnb was able to save costs and prepare for growth after availing itself of AWS services, and is completely satisfied.

If you want to work as a cloud engineer or start your career in cloud computing, now is the time to do it. You may become a successful cloud engineer by obtaining the necessary qualifications. You may also take an online Cloud Computing Course.

Got a question for us? Mention them in the comments section and we will get back to you.


HBase Introduction and Facebook Case Study

As we mentioned in our Hadoop Ecosystem blog, HBase is an essential part of the Hadoop ecosystem. So now, I would like to take you through this HBase tutorial, where I will introduce you to Apache HBase, and then we will go through the Facebook Messenger case study. We are going to cover the following topics in this HBase tutorial blog:

  • History of Apache HBase 
  • Introduction of Apache HBase
  • NoSQL Databases and its types
  • HBase vs Cassandra
  • Apache HBase Features
  • HBase vs HDFS
  • Facebook Messenger Case Study

Apache HBase Tutorial: History

Let us start with the history of HBase and know how HBase has evolved over a period of time.

History of HBase - HBase Tutorial - Edureka

  • Apache HBase is modelled after Google’s BigTable, which is used to collect data and serve requests for various Google services like Maps, Finance, Earth etc.
  • Apache HBase began as a project by the company Powerset for Natural Language Search, which was handling massive and sparse data sets.
  • Apache HBase was first released in February 2007. Later in January 2008, HBase became a sub project of Apache Hadoop.
  • In 2010, HBase became Apache’s top level project.

Apache HBase Tutorial: Introduction to HBase

HBase is an open-source, multidimensional, distributed, scalable NoSQL database written in Java. HBase runs on top of HDFS (Hadoop Distributed File System) and provides BigTable-like capabilities to Hadoop. It is designed to provide a fault-tolerant way of storing large collections of sparse data sets.

Since HBase achieves high throughput and low latency by providing faster read/write access on huge data sets, it is the choice for applications that require fast, random access to large amounts of data.

It provides compression, in-memory operations, and Bloom filters (a data structure which tells whether a value is present in a set or not) to fulfill the requirement of fast and random reads and writes.
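To make the Bloom filter idea concrete, here is a toy Python sketch of the data structure (not HBase's actual implementation; the sizes and hash scheme are illustrative): items are hashed to several bit positions, and membership tests can yield false positives but never false negatives:

```python
# Toy Bloom filter: k hash functions set k bits per item. A lookup that
# finds any unset bit proves the item was never added.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting one cryptographic hash (illustrative).
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("row-0001")
print(bf.might_contain("row-0001"))  # True
print(bf.might_contain("row-9999"))  # almost certainly False
```

This is why a store can skip reading a disk block entirely when the filter says a rowkey is absent.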

Let’s understand it through an example: A jet engine generates various types of data from different sensors (pressure, temperature, speed, etc.) which indicate the health of the engine. This is very useful for understanding the problems and status of the flight. Continuous engine operations generate 500 GB of data per flight, and there are approximately 300 thousand flights per day. So, engine analytics applied to such data in near real time can be used to proactively diagnose problems and reduce unplanned downtime. This requires a distributed environment to store large amounts of data with fast random reads and writes for real-time processing. Here, HBase comes to the rescue. I will talk about HBase Read and Write in detail in my next blog on HBase Architecture.

As we know, HBase is a NoSQL database. So, before diving deeper into HBase, let’s first discuss NoSQL databases and their types.

Apache HBase Tutorial: NoSQL Databases

NoSQL means Not Only SQL. NoSQL databases are modeled so that they can represent data in other than tabular formats, unlike relational databases. They use different formats to represent data, and thus there are different types of NoSQL databases based on their representation format. Most NoSQL databases favor availability and speed over consistency. Now, let us move ahead and understand the different types of NoSQL databases and their representation formats.


NoSQL Databases comparison - HBase Tutorial - Edureka

Key-Value stores: 

It is a schema-less database which contains keys and values. Each key points to a value that is an array of bytes and can be a string, BLOB, XML, etc. For example, Lamborghini is a key that can point to the values Gallardo, Aventador, Murciélago, Reventón, Diablo, Huracán, Veneno, Centenario, etc.

Key-Value stores databases: Aerospike, Couchbase, Dynamo, FairCom c-treeACE, FoundationDB, HyperDex, MemcacheDB, MUMPS, Oracle NoSQL Database, OrientDB, Redis, Riak, Berkeley DB.


Key-value stores handle size well and are good at processing a constant stream of read/write operations with low latency. This makes them perfect for user preference and profile stores; product recommendations (the latest items viewed on a retailer’s website drive future customer product recommendations); and ad serving (customer shopping habits result in customized ads, coupons, etc. for each customer in real time).
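In miniature, a key-value store behaves like a dictionary of opaque values. The sketch below is illustrative only (real stores add persistence, replication, and eviction), reusing the Lamborghini example plus a hypothetical retail-style preference key:

```python
# Toy key-value store: keys map to opaque values (strings, lists,
# serialized blobs). A dict models the core get/put interface.
store = {}

def put(key, value):
    store[key] = value

def get(key, default=None):
    return store.get(key, default)

put("Lamborghini", ["Gallardo", "Aventador", "Huracán"])
put("user:42:last_viewed", "product-981")  # e.g. data driving recommendations

print(get("Lamborghini")[0])        # Gallardo
print(get("user:7", default="??"))  # ??
```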

Document Oriented:

It follows the same key-value pair model, but the value is semi-structured, like XML, JSON, or BSON. These structures are considered documents.

Document Based databases: Apache CouchDB, Clusterpoint, Couchbase, DocumentDB, HyperDex, IBM Domino, MarkLogic, MongoDB, OrientDB, Qizx, RethinkDB.


As documents support flexible schemas, fast reads and writes, and partitioning, they are suitable for creating user databases in services like Twitter, e-commerce websites, etc.

Column Oriented:

In this database, data is stored in cells grouped in columns rather than rows. Columns are logically grouped into column families, which can be created either during schema definition or at runtime.

These databases store all the cells corresponding to a column as a continuous disk entry, thus making access and search much faster.

Column Based Databases: HBase, Accumulo, Cassandra, Druid, Vertica.


They support huge storage and allow faster read/write access over it. This makes column-oriented databases suitable for storing customer behavior on e-commerce websites, financial systems like Google Finance, stock market data, Google Maps, etc.
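The difference between row- and column-oriented layouts can be sketched in a few lines (illustrative only, not a real database engine): storing a column's values contiguously means an aggregation touches only that column's data:

```python
# Row-oriented vs column-oriented layout of the same tiny table.
# Illustrative only; a real engine manages pages, compression, etc.
row_store = [
    {"id": 1, "price": 10.0, "qty": 3},
    {"id": 2, "price": 7.5,  "qty": 1},
    {"id": 3, "price": 4.0,  "qty": 8},
]

# Column-oriented: each column's values stored contiguously.
col_store = {
    "id":    [1, 2, 3],
    "price": [10.0, 7.5, 4.0],
    "qty":   [3, 1, 8],
}

# Aggregating one column scans a single contiguous list,
# instead of touching every field of every row.
print(sum(col_store["price"]))  # 21.5
```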

Graph Oriented:

It stores data as a flexible graph, unlike tabular SQL representations. These databases easily solve scalability problems, as they contain edges and nodes which can be extended according to the requirements.

Graph based databases: AllegroGraph, ArangoDB, InfiniteGraph, Apache Giraph, MarkLogic, Neo4J, OrientDB, Virtuoso, Stardog.


This is basically used in Fraud detection, Real-time recommendation engines (in most cases e-commerce), Master data management (MDM), Network and IT operations, Identity and access management (IAM), etc.


HBase and Cassandra are the two famous column-oriented databases. So, taking it to a higher level, let us compare and understand the architectural and working differences between HBase and Cassandra.

HBase Tutorial: HBase VS Cassandra

  • HBase is modelled on BigTable (Google), while Cassandra is based on DynamoDB (Amazon) and was initially developed by Facebook.
  • HBase leverages Hadoop infrastructure (HDFS, ZooKeeper), while Cassandra evolved separately; you can, however, combine Hadoop and Cassandra as per your needs.
  • HBase has several components which communicate together, like the HBase HMaster, ZooKeeper, NameNode, and Region Servers. Cassandra, by contrast, has a single node type, in which all nodes are equal and perform all functions. Any node can be the coordinator; this removes a single point of failure.
  • HBase is optimized for reads and supports single writes, which leads to strict consistency. HBase supports range-based scans, which makes the scanning process faster. Cassandra supports single-row reads and maintains eventual consistency.
  • Cassandra does not support range-based row scans, which slows the scanning process compared to HBase.
  • HBase supports ordered partitioning, in which rows of a column family are stored in RowKey order, whereas in Cassandra ordered partitioning is a challenge. Due to RowKey partitioning, the scanning process is faster in HBase than in Cassandra.
  • HBase does not support read load balancing: one Region Server serves the read request, and replicas are only used in case of failure. Cassandra supports read load balancing and can read the same data from various nodes, though this can compromise consistency.
  • In the CAP (Consistency, Availability & Partition-Tolerance) theorem, HBase maintains Consistency and Availability, while Cassandra focuses on Availability and Partition-Tolerance.

Now let’s take a deep dive and understand the features of Apache HBase which makes it so popular.

Apache HBase Tutorial: Features of HBase

Features of HBase - HBase Tutorial - Edureka


  • Atomic read and write: HBase provides atomic reads and writes at the row level: during one read or write process, all other processes are prevented from performing read or write operations on that row.
  • Consistent reads and writes: HBase provides consistent reads and writes due to the above feature.
  • Linear and modular scalability: As data sets are distributed over HDFS, HBase is linearly scalable across various nodes, as well as modularly scalable, as it can be divided across various nodes.
  • Automatic and configurable sharding of tables: HBase tables are divided into regions, and these regions are distributed across the cluster. Regions split and are redistributed automatically as the data grows.
  • Easy to use Java API for client access: HBase provides an easy-to-use Java API for programmatic access.
  • Thrift gateway and RESTful Web services: HBase also supports Thrift and REST APIs for non-Java front-ends.
  • Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high-volume query optimization.
  • Automatic failure support: HBase with HDFS provides a WAL (Write-Ahead Log) across clusters, which provides automatic failure support.
  • Sorted rowkeys: As searching is done over ranges of rows, HBase stores rowkeys in lexicographical order. Using these sorted rowkeys and timestamps, we can build optimized requests.
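The sorted-rowkeys idea can be sketched in Python (illustrative, not HBase's storage format): keeping keys in lexicographic order lets a range scan locate its boundaries with binary search instead of examining every row:

```python
# Sorted rowkeys enable fast range scans: binary-search the boundaries,
# then return the contiguous slice between them.
import bisect

rows = sorted(["user#003", "user#001", "sensor#010", "sensor#002", "user#002"])
# lexicographic order: sensor#002, sensor#010, user#001, user#002, user#003

def scan(start, stop):
    """Return all rowkeys in [start, stop) from the sorted key list."""
    lo = bisect.bisect_left(rows, start)
    hi = bisect.bisect_left(rows, stop)
    return rows[lo:hi]

print(scan("user#", "user#003"))  # ['user#001', 'user#002']
```

Prefixing rowkeys by entity type, as above, is a common way to keep related rows adjacent for scanning.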

Now moving ahead in this HBase tutorial, let me tell you what are the use-cases and scenarios where HBase can be used and then, I will compare HDFS and HBase.

I would like to draw your attention to the scenarios in which HBase is the best fit.

HBase Tutorial: Where can we use HBase?

  • We should use HBase where we have large data sets (millions or billions of rows and columns) and require fast, random, real-time read and write access to the data.
  • The data sets are distributed across various clusters and we need high scalability to handle the data.
  • The data is gathered from various sources and is semi-structured, unstructured, or a combination of both; HBase can handle it easily.
  • You want to store column-oriented data.
  • You have many versions of the data sets and you need to store all of them.

Before I jump to Facebook messenger case study, let me tell you what are the differences between HBase and HDFS.

HBase Tutorial: HBase VS HDFS

HDFS is a Java-based distributed file system that allows you to store large data across multiple nodes in a Hadoop cluster. So, HDFS is the underlying storage system for storing data in the distributed environment. HDFS is a file system, whereas HBase is a database (similar to the relationship between NTFS and MySQL).

As both HDFS and HBase store any kind of data (structured, semi-structured, and unstructured) in a distributed environment, let’s look at the differences between the HDFS file system and HBase, a NoSQL database.

  • HBase provides low latency access to small amounts of data within large data sets while HDFS provides high latency operations.
  • HBase supports random read and writes while HDFS supports WORM (Write once Read Many or Multiple times).
  • HDFS is basically or primarily accessed through MapReduce jobs while HBase is accessed through shell commands, Java API, REST, Avro or Thrift API.

HDFS stores large data sets in a distributed environment and leverages batch processing on that data. For example, it would help an e-commerce website store millions of customers’ data in a distributed environment, accumulated over a long period of time (perhaps 4-5 years or more). The company can then run batch processing over that data to analyze customer behaviors, patterns, and requirements, and find out, for example, which types of products customers purchase in which months. In short, HDFS helps store archived data and execute batch processing over it.

HBase, in contrast, stores data in a column-oriented manner, where each column is stored together, so that reading becomes faster and real-time processing is possible. For example, in a similar e-commerce environment, it stores millions of product records. So if you search for a product among millions of products, it optimizes the request and search process, producing the result immediately (you could say in real time). I will cover the detailed HBase architecture in my next blog.

As we know, HBase is distributed over HDFS, so a combination of both gives us a great opportunity to use the benefits of both in a tailored solution, as we are going to see in the Facebook Messenger case study below.


HBase Tutorial: Facebook Messenger Case Study

Facebook Messaging Platform shifted from Apache Cassandra to HBase in November 2010.

Facebook Messenger combines messages, email, chat, and SMS into a real-time conversation. Facebook was trying to build a scalable and robust infrastructure to handle this set of services.

At that time, the message infrastructure handled over 350 million users sending over 15 billion person-to-person messages per month, while the chat service supported over 300 million users who sent over 120 billion messages per month.

By monitoring the usage, they found out that, two general data patterns emerged:

  • A short set of temporal data that tends to be volatile
  • An ever-growing set of data that rarely gets accessed

Facebook wanted to find a storage solution for these two usage patterns and they started investigating to find a replacement for the existing Messages infrastructure.

Earlier, in 2008, they had adopted an open-source database, Cassandra, an eventual-consistency key-value store that was already in production serving traffic for Inbox Search. Their teams also had great knowledge of using and managing MySQL, so switching either of these technologies was a serious concern for them.

They spent a few weeks testing different frameworks, to evaluate the clusters of MySQL, Apache Cassandra, Apache HBase and other systems. They ultimately selected HBase.

MySQL failed to handle the large data sets efficiently: as the indexes and data sets grew large, performance suffered. And they found Cassandra’s eventual consistency model difficult to reconcile with their new Messages infrastructure.

The major problems were: 

  • Storing the large, continuously growing sets of data from various Facebook services.
  • A database that could support heavy processing on that data.
  • High performance to serve millions of requests.
  • Maintaining consistency in storage and performance.

Facebook Challenges - HBase Tutorial - Edureka

Figure: Challenges faced by Facebook messenger

For all these problems, Facebook came up with a solution: HBase. Facebook adopted HBase for serving Facebook Messenger, chat, email, etc., due to its various features.

HBase comes with very good scalability and performance for this workload, along with a simpler consistency model than Cassandra. They also found HBase the most suitable for their other requirements, such as auto load balancing and failover, compression support, and multiple shards per server.

HDFS, the underlying file system used by HBase, also provided several needed features such as end-to-end checksums, replication, and automatic load rebalancing.

Facebook HBase Solution - HBase Tutorial - Edureka

Figure: HBase as a solution to Facebook messenger

As they adopted HBase, they also focused on committing their improvements back to HBase itself and started working closely with the Apache community.

Since Messages accepts data from different sources such as SMS, chats, and email, they wrote an application server to handle all decision making for a user’s messages. It interfaces with a large number of other services. Attachments are stored in Haystack (which works on HBase). They also wrote a user discovery service on top of Apache ZooKeeper, which talks to other infrastructure services for friend relationships, email account verification, delivery decisions, and privacy decisions.

The Facebook team spent a lot of time confirming that each of these services is robust and reliable and provides good performance for a real-time messaging system.

I hope you found this HBase tutorial blog informative. In this blog, you learned the basics of HBase and its features. In my next blog of the Hadoop Tutorial Series, I will explain the architecture and workings of HBase that make it popular for fast, random reads and writes.


Now that you have understood the basics of HBase, check out the Hadoop training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka Big Data Hadoop Certification Training course helps learners become experts in HDFS, Yarn, MapReduce, Pig, Hive, HBase, Oozie, Flume and Sqoop using real-time use cases in the Retail, Social Media, Aviation, Tourism, and Finance domains.

Got a question for us? Please mention it in the comments section and we will get back to you.


Mike Kozey


A Lightweight Extension Library for Transforming String Cases




Add the dependency to your pubspec.yaml:

dependencies:
  string_case_converter: ^1.0.0

Run pub get to install.

Getting Started

The package handles converting words to:

Camel Case

print('hello there'.toCamelCase()); // helloThere

Pascal Case

print('hello there'.toPascalCase()); // HelloThere

Kebab Case

print('hello there'.toKebabCase()); // hello-there

Snake Case

print('hello!@£(^)^%*&%^%^% there'.toSnakeCase()); // hello_there

Constant Case

print('hello there'.toConstantCase()); //  HELLO_THERE

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add string_case_converter

With Flutter:

 $ flutter pub add string_case_converter

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

  string_case_converter: ^1.0.0

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:string_case_converter/string_case_converter.dart';


import 'package:string_case_converter/string_case_converter.dart';

void main() {
  print('hello there'.toCamelCase()); // helloThere

  print('hello there'.toPascalCase()); // HelloThere

  print('hello there'.toKebabCase()); // hello-there

  print('hello!@£(^)^%*&%^%^% there'.toSnakeCase()); // hello_there

  print('hello there'.toConstantCase()); // HELLO_THERE
}


For a detailed changelog, see the file

Nat Grady


A Systematic Approach to Parse Strings & Automate The Conversion



The snakecase package introduces a fresh and straightforward approach to case conversion, based upon a consistent design philosophy.

For a short intro regarding typical use cases, see the blog article Introducing the snakecase package.

Install and load

# install snakecase from cran
# install.packages("snakecase")

# or the (stable) development version hosted on github
# install.packages("remotes")

# load snakecase
library(snakecase)

Basic usage

The workhorse function of this package is to_any_case(). It converts strings (by default) into snake case:

string <- c("lowerCamelCase", "ALL_CAPS", "I-DontKNOWWhat_thisCASE_is")

to_any_case(string)
## [1] "lower_camel_case"              "all_caps"                     
## [3] "i_dont_know_what_this_case_is"

However, one can choose between many other cases like "lower_camel", "upper_camel", "all_caps", "lower_upper", "upper_lower", "sentence" and "mixed", which are based on "parsed" case:

to_any_case(string, case = "parsed")
## [1] "lower_Camel_Case"              "ALL_CAPS"                     
## [3] "I_Dont_KNOW_What_this_CASE_is"

Shortcuts (wrappers around to_any_case()) are also provided:

to_upper_camel_case(string)
## [1] "LowerCamelCase"          "AllCaps"                
## [3] "IDontKnowWhatThisCaseIs"

Be aware that automatic case conversion depends on the input string and it is therefore recommended to verify the results. You might want to pipe these into dput() and hardcode name changes instead of blindly trusting the output:


to_snake_case(c("SomeBAdInput", "someGoodInput")) %>% dput()
## c("some_b_ad_input", "some_good_input")

Big picture (a parameterized workflow)

The to_any_case() function basically enables you to convert any string into any case. This is achieved via a well-thought-out process of parsing (abbreviations, sep_in, parsing_option), conversion (transliterations, case) and postprocessing (numerals, sep_out). The specific arguments allow you to customize the pipeline.

In this example, you can see the whole pipeline including some implementation details.

Some further cosmetics (unique_sep, empty_fill, prefix, postfix) can be applied to the output, see arguments.
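As an illustration of the three stages, here is a toy model in Python (not the package's implementation; the function and argument names only mirror the terminology above):

```python
import re

def parse(s: str, sep_in: str = r"[^A-Za-z0-9]+") -> list[str]:
    # Stage 1 (parsing): mark word boundaries at separators and lower->UPPER transitions.
    s = re.sub(sep_in, "_", s)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return [w for w in s.split("_") if w]

def convert(words: list[str], case: str) -> list[str]:
    # Stage 2 (conversion): apply the target case to each parsed word.
    if case == "snake":
        return [w.lower() for w in words]
    if case == "upper_camel":
        return [w.capitalize() for w in words]
    raise ValueError(case)

def postprocess(words: list[str], sep_out: str) -> str:
    # Stage 3 (postprocessing): join with the output separator
    # (the real package also applies prefix/postfix etc. here).
    return sep_out.join(words)

def to_any_case(s: str, case: str = "snake", sep_out: str = "_") -> str:
    return postprocess(convert(parse(s), case), sep_out)

print(to_any_case("lowerCamelCase"))                     # lower_camel_case
print(to_any_case("lowerCamelCase", "upper_camel", ""))  # LowerCamelCase
```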


string: A character vector, containing the input strings.


abbreviations: One challenge in case conversion is odd-looking “mixed cases”. These might be introduced due to country codes or other abbreviations, which are usually written in upper case. Before you consider a different parsing_option (see below), you might just want to use the abbreviations argument:

to_snake_case(c("HHcity", "IDTable1", "KEYtable2", "newUSElections"),
              abbreviations = c("HH", "ID", "KEY", "US"))
## [1] "hh_city"          "id_table_1"       "key_table_2"     
## [4] "new_us_elections"

Abbreviations are consistently formatted regarding the supplied case. However, for title-, mixed-, lower-camel- and upper-camel-case the formatting is specified by the formatting of the input:

to_upper_camel_case(c("user_id", "finals_mvp"), abbreviations = c("Id", "MVP"))
## [1] "UserId"    "FinalsMVP"

sep_in: By default non-alphanumeric characters are treated as separators:

to_snake_case("malte.grosser@gmail.com")
## [1] "malte_grosser_gmail_com"

To suppress this behaviour, just set sep_in = NULL:

to_snake_case("malte.grosser@gmail.com", sep_in = NULL)
## [1] "malte.grosser@gmail.com"

Since sep_in takes regular expressions as input, to_any_case() becomes very flexible. We can for example express that dots behind digits should not be treated as separators, since they might be intended as decimal marks:

to_snake_case("Pi.Value:3.14", sep_in = ":|(?<!\\d)\\.")
## [1] "pi_value_3.14"

parsing_option: We can modify the abbreviations example a bit. In this case, another parsing option might be handy:

to_snake_case(c("HHcity", "IDtable1", "KEYtable2", "newUSelections"),
              parsing_option = 2)
## [1] "hh_city"          "id_table_1"       "key_table_2"     
## [4] "new_us_elections"

Sometimes it might make sense to treat mixes of words and abbreviations as one word:

to_snake_case(c("HHcity", "IDtable1", "KEYtable2", "newUSelections"),
              parsing_option = 3)
## [1] "hhcity"          "idtable_1"       "keytable_2"      "new_uselections"

To suppress conversion after a non-alphanumeric character (except "_"), you can add a minus in front of the parsing_option, e.g.:

                    sep_in = NULL,
                    parsing_option = -1)
## [1] ""

And to suppress the parsing set parsing_option = 0.

If you are interested in the implementation of a specific parsing_option, please open an issue.


transliterations: To turn special characters (for example German umlauts) into ASCII, one can incorporate transliterations from stringi::stri_trans_list() or this package (also in combination):

to_upper_camel_case("Doppelgänger is originally german",
                    transliterations = "german")
## [1] "DoppelgaengerIsOriginallyGerman"

to_snake_case("Schönes Café",
              transliterations = c("german", "Latin-ASCII"))
## [1] "schoenes_cafe"

Additionally, it is easy to specify transliterations or, more generally, any replacement as a named element of the character vector supplied to the transliterations argument:

            transliterations = c("boy" = "baby", "snake" = "screaming_snake"))

to_snake_case("column names 100 % snake case", sep_in = NULL, 
              transliterations = c("%" = "percent"), postfix = ";-)")
## [1] "column_names_100_percent_snake_case;-)"

If you can provide transliterations for your (or any other) country, please drop them within this issue.

case: The desired target case, provided as one of the following:

  • snake_case: "snake"
  • lowerCamel: "lower_camel" or "small_camel"
  • UpperCamel: "upper_camel" or "big_camel"
  • ALL_CAPS: "all_caps" or "screaming_snake"
  • lowerUPPER: "lower_upper"
  • UPPERlower: "upper_lower"
  • Sentence case: "sentence"
  • Title Case: "title" - This one is basically the same as sentence case, but in addition it is wrapped into tools::toTitleCase() and abbreviations are always turned into upper case.

There are six “special” cases available:

  • "parsed": This case is underlying all other cases. Every substring a string consists of becomes surrounded by an underscore (depending on the parsing_option). Underscores at the start and end are trimmed. No lower or upper case pattern from the input string is changed.
  • "mixed": Almost the same as case = "parsed". Every letter which is not at the start or behind an underscore is turned into lowercase. If a substring is set as an abbreviation, it will be turned into upper case.
  • "swap": Upper case letters will be turned into lower case and vice versa. Also case = "flip" will work. Doesn’t work with any of the other arguments except unique_sep, empty_fill, prefix and postfix.
  • "random": Each letter will be randomly turned into lower or upper case. Doesn’t work with any of the other arguments except unique_sep, empty_fill, prefix and postfix.
  • "none": Neither parsing nor case conversion occur. This case might be helpful, when one wants to call the function for the quick usage of the other parameters. To suppress replacement of spaces to underscores set sep_in = NULL. Works with sep_in, transliterations, sep_out, unique_sep, empty_fill, prefix and postfix.
  • "internal_parsing": This case is returning the internal parsing (suppressing the internal protection mechanism), which means that alphanumeric characters will be surrounded by underscores. It should only be used in very rare use cases and is mainly implemented to showcase the internal workings of to_any_case().


numerals: If you want to format the alignment of numerals use numerals ("middle" (default), "left", "right", "asis" or "tight"). E.g. to add no extra separators around digits use:

to_snake_case("species42value 23month 7-8",
              numerals = "asis")
## [1] "species42value_23month_7_8"

sep_out: For the creation of other well known or completely new cases it is possible to adjust the output separator (sep_out):

to_snake_case(string, sep_out = ".")
## [1] "lower.camel.case"              "all.caps"                     
## [3] "i.dont.know.what.this.case.is"

to_mixed_case(string, sep_out = " ")
## [1] "lower Camel Case"              "All Caps"                     
## [3] "I Dont Know What this Case is"

to_screaming_snake_case(string, sep_out = "=")
## [1] "LOWER=CAMEL=CASE"              "ALL=CAPS"                     
## [3] "I=DONT=KNOW=WHAT=THIS=CASE=IS"

When length(sep_out) > 1, its last element gets recycled and the output separators are incorporated per string according to the order in sep_in. This might come in handy when e.g. formatting file names:

to_snake_case(
  string = c("YYYY_MM.DD_bla_bla_bla", "2019_01.09_bla_bla_bla"),
  sep_out = c("", "", "-", "_"),
  postfix = ".txt")
## [1] "yyyymmdd-bla_bla_bla.txt" "20190109-bla_bla_bla.txt"


unique_sep: (character): When not NULL, non-unique output strings get an integer suffix, separated by the supplied string.

empty_fill: (character): Empty output ("") will be replaced by this string.

prefix: (character): simple prefix.

postfix: (character): simple post-/suffix.

Design decisions


to_any_case() is an attempt to provide good low-level control while still being high-level enough for daily usage. For another example of case conversion with good default settings, you can look into the clean_names() function from the janitor package, which works directly on data frames. You can also look into the sjPlot package, where automatic case conversion is used to provide nice default labels within graphics.

For daily usage (especially when preparing fixed scripts) I recommend combining to_any_case() with dput(). In this way, you can quickly inspect whether the output is as intended and hardcode the results (which is basically safer and good practice in my opinion). In very complex cases you might just want to manually fix the output from dput() instead of tweaking the arguments too much.

Dependencies, vectorisation, speed and special input handling

The package is internally built on the stringr package, which means that many powerful features are provided by default:

  • to_any_case() is vectorised over most of its arguments like string, sep_in, sep_out, empty_fill, prefix and postfix.
  • internal character operations are implemented in fast C++ (via stringi). However, some speed is lost due to a more systematic and maintainable implementation.
  • special input like character(0), NA etc. is handled in exactly the same consistent and convenient manner as in the stringr package.

Known limitations

In general, combinations of one-letter words or abbreviations are hard to convert back from cases with "" as the default separator. However, it is not impossible:

to_snake_case("ABCD", sep_out = ".",
              transliterations = c("^ABCD$" = "A_B_C_D"))
## [1] "a.b.c.d"

to_snake_case("BVBFCB:5-2", sep_in = ":",
              transliterations = c("^BVBFCB" = "BVB_FCB"))
## [1] "bvb_fcb_5-2"
to_any_case("a_b_c_d", case = "upper_camel")
## [1] "ABCD"

Sometimes further pre- or postprocessing might be needed. For example you can easily write your own parsing via a sequence of calls like str_replace_all(string, (some_pattern), "_\\1_").

You can decide yourself: open an issue here or build something quickly yourself via packages like base, stringr, stringi, etc.

Design Philosophy

Practical influences

Conversion to a specific target case is not always obvious or unique. In general, a clean conversion can only be guaranteed when the input string is meaningful.

Take for example a situation where you have IDs for some customers. Instead of calling the column “CustomerID” you abbreviate it to “CID”. Without further knowledge about the meaning of CID it will be impossible to know that it should be converted to “c_id” when using to_snake_case(). Instead it will be converted to:

to_snake_case("CID")
## [1] "cid"

We could have also converted to “c_i_d” and if we don’t know the meaning of “CID”, we can’t decide which one is the better solution. However, it is easy to exclude specific approaches by counterexamples. So in practice it might be nicer to convert “SCREAMING_SNAKE_CASE” to “screaming_snake_case” instead of “s_c_r_e_a_m_i_n_g_s_n_a_k_e_c_a_s_e” (or “screamin_g_snak_e_cas_e” or “s_creaming_s_nake_c_ase”), which means that also “cid” is preferable to “c_i_d” (or “c_id” or “ci_d”) without further knowledge.

Since the computer can’t know by itself that we want “c_id”, it is easiest if we provide it with the right information (here in the form of a valid PascalCase syntax):

to_snake_case("CId")
## [1] "c_id"

In this way it is guaranteed to get the correct conversion, and the only chance of an error lies in an accidentally incorrect input string or a bug in the converter function to_snake_case() (or a sequence of (one-letter) abbreviations; see known limitations).

Consistent behaviour

In many scenarios the analyst doesn’t have much influence on naming conventions, and sometimes it is not possible to find out the exact meaning of a variable name, even if we ask the original author. In some cases data might also have been named by a machine, and the results can be relatively technical. So in general it is a good idea to compare the input of the case converter functions with their output, to see if the intended meanings at least seem to be preserved.

To make this as painless as possible, it is best to provide a logic that is robust and can also handle relatively complex cases. Note for example the string “RStudio”. How should one convert it to snake case? We have seen a similar example with “CId”, but for now we focus on something different. In the case of “RStudio”, we could convert to:

  1. “r_s_tudio”,
  2. “rs_tudio” or
  3. “r_studio”.

If we are conservative about any assumptions on the meaning of “RStudio”, we can’t decide which is the correct conversion. It is also not valid to assume that “RStudio” was intentionally written in PascalCase. Of course we know that “r_studio” is the correct solution, but we can get there also via different considerations. Let us try to convert our three possible translations (back) to PascalCase and from there back to snake case. What should the output look like?

  1. r_s_tudio -> RSTudio -> r_s_t_udio
  2. rs_tudio -> RsTudio -> rs_tudio
  3. r_studio -> RStudio -> r_studio

Neither of the first two alternatives can be consistently converted back to a valid PascalCase input (“RStudio”), and with the first logic the further snake case conversion seems to be complete nonsense. Only the third case is consistent when converting back to PascalCase, which matches the input “RStudio”. It is also consistent with itself when converting from PascalCase back to snake_case.

In this way, we can get a good starting point on how to convert specific strings to valid snake_case. Once we have a clean snake_case conversion, we can easily convert further to smallCamelCase, BigCamelCase, SCREAMING_SNAKE_CASE or anything else.

Three rules of consistency

In the last sections we have seen that it is reasonable to question a specific conversion from an input string to some standardized case. We have also seen that it is helpful to introduce tests on the behavior of a specific conversion pattern in related cases. The latter helps to detect inappropriate conversions and also establishes consistent behavior when converting exotic cases or switching between standardized cases. Maybe we can generalize some of these tests and introduce some kind of consistency patterns. Whenever inappropriate or non-unique conversion possibilities appear, such rules help us deal with the situation and exclude inconsistent conversion alternatives.

During the development of this package I recognized three specific rules that seem reasonable to be valid whenever cases are converted. To be more general, we just use to_x() and to_y() to refer to any two differing converter functions from the set of functions including to_snake_case(), to_screaming_snake_case(), to_lower_camel_case() and to_upper_camel_case(). (Other cases like “lower_upper” or “upper_lower” could be included, if we consider parsing_option = 2 within the equations.)

When we have converted to a standardized case, a new conversion to the case should not change the output:

to_x(to_x(string)) = to_x(string)

When converting to a specific case, it should not matter if a conversion to another case happened already:

to_y(to_x(string)) = to_y(string)

It should always be possible to switch between different cases, without any loss of information:

to_x(to_y(to_x(string))) = to_x(string)

Note that it can easily be shown that rule three follows from rule two. However, it seems reasonable to express each rule on its own, since they all have an interpretation, and together they give a really good intuition about the properties of the converter functions.
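The three rules can be checked mechanically. Below is a minimal Python sketch with toy converters (not the snakecase package itself) for which all three rules hold:

```python
import re

def to_snake(s: str) -> str:
    # Insert "_" before each lower->UPPER transition, then lowercase everything.
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()

def to_upper_camel(s: str) -> str:
    # Normalize to snake first, then capitalize each word.
    return "".join(w.capitalize() for w in to_snake(s).split("_"))

for s in ["some_input", "lowerCamelCase"]:
    # Rule 1: converting twice is the same as converting once.
    assert to_snake(to_snake(s)) == to_snake(s)
    # Rule 2: a prior conversion to another case does not matter.
    assert to_snake(to_upper_camel(s)) == to_snake(s)
    # Rule 3: round-tripping loses no information.
    assert to_upper_camel(to_snake(to_upper_camel(s))) == to_upper_camel(s)

print("all three rules hold")
```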


To give a meaningful conversion for different cases, we systematically designed test cases for conversion to snake, small camel, and big camel case, among others. To be consistent regarding the conversion between different cases, we also test the rules above on all test cases.

Related Resources

Please note that the snakecase package is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Author: Tazinho
Source Code: 
License: GPL-3.0 license

#r #case 

A Systematic Approach to Parse Strings & Automate The Conversion

Rate Limit Auto-configure for Spring Cloud Netflix Zuul


Module to enable rate limit per service in Netflix Zuul.

There are several built-in rate limit approaches:

  • Authenticated User
    • Uses the authenticated username or 'anonymous'
  • Request Origin
    • Uses the user origin request
  • URL
    • Uses the request path of the downstream service
  • URL Pattern
    • Uses the request Ant path pattern to the downstream service
  • ROLE
    • Uses the authenticated user roles
  • Request method
    • Uses the HTTP request method
  • Request header
    • Uses the HTTP request header
  • Global configuration per service:
    • This one does not validate the request Origin, Authenticated User or URI
    • To use this approach just don’t set param 'type'
Note: It is possible to combine Authenticated User, Request Origin, URL, ROLE and Request Method by adding multiple values to the list.
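Conceptually, each approach above just contributes one component to the key under which request counts are tracked during a refresh interval. A generic fixed-window sketch in Python (illustrative only; this is not the Zuul filter's actual code):

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Counts requests per composite key within a refresh interval."""

    def __init__(self, limit, refresh_interval):
        self.limit = limit
        self.refresh_interval = refresh_interval
        self.windows = defaultdict(lambda: [0.0, 0])  # key -> [window_start, count]

    def key(self, service, user, origin, url):
        # Combining user + origin + url mirrors setting multiple 'type' values.
        return ":".join([service, user, origin, url])

    def allow(self, service, user, origin, url, now=None):
        now = time.monotonic() if now is None else now
        window = self.windows[self.key(service, user, origin, url)]
        if now - window[0] >= self.refresh_interval:
            window[0], window[1] = now, 0  # window expired: reset the counter
        window[1] += 1
        return window[1] <= self.limit

limiter = FixedWindowRateLimiter(limit=2, refresh_interval=60)
print(limiter.allow("svc", "bob", "10.0.0.1", "/api", now=0.0))   # True
print(limiter.allow("svc", "bob", "10.0.0.1", "/api", now=1.0))   # True
print(limiter.allow("svc", "bob", "10.0.0.1", "/api", now=2.0))   # False (limit hit)
print(limiter.allow("svc", "bob", "10.0.0.1", "/api", now=61.0))  # True (window reset)
```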


Note: Latest version: Maven Central
Note: If you are using Spring Boot version 1.5.x you MUST use Spring Cloud Zuul RateLimit version 1.7.x. Please take a look at the Maven Central and pick the latest artifact in this version line.

Add the dependency on pom.xml


Add the following dependency accordingly to the chosen data storage:





Spring Data JPA


This implementation also requires a database table; below you can find a sample script:

CREATE TABLE rate (
  rate_key VARCHAR(255) NOT NULL,
  remaining BIGINT,
  remaining_quota BIGINT,
  reset BIGINT,
  expiration TIMESTAMP,
  PRIMARY KEY(rate_key)
);

Bucket4j JCache


Bucket4j Hazelcast (depends on Bucket4j JCache)


Bucket4j Infinispan (depends on Bucket4j JCache)


Bucket4j Ignite (depends on Bucket4j JCache)


Sample YAML configuration

zuul:
  ratelimit:
    key-prefix: your-prefix
    enabled: true
    repository: REDIS
    behind-proxy: true
    add-response-headers: true
    deny-request:
      response-status-code: 404 #default value is 403 (FORBIDDEN)
    default-policy-list: #optional - will apply unless specific policy exists
      - limit: 10 #optional - request number limit per refresh interval window
        quota: 1000 #optional - request time limit per refresh interval window (in seconds)
        refresh-interval: 60 #default value (in seconds)
        type: #optional
          - user
          - origin
          - url
          - http_method
    policy-list:
      myServiceId:
        - limit: 10 #optional - request number limit per refresh interval window
          quota: 1000 #optional - request time limit per refresh interval window (in seconds)
          refresh-interval: 60 #default value (in seconds)
          type: #optional
            - user
            - origin
            - url
        - type: #optional value for each type
            - user=anonymous
            - url=/api #url prefix
            - role=user
            - http_method=get #case insensitive
            - http_header=customHeader
        - type:
            - url_pattern=/api/*/payment

Sample Properties configuration




# Adding multiple rate limit type

# Adding the first rate limit policy to "myServiceId"

# Adding the second rate limit policy to "myServiceId"

Both 'quota' and 'refresh-interval' can be expressed with Spring Boot’s duration formats:

A regular long representation (using seconds as the default unit)

The standard ISO-8601 format used by java.time.Duration (e.g. PT30M means 30 minutes)

A more readable format where the value and the unit are coupled (e.g. 10s means 10 seconds)
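As a rough illustration of these three shapes, here is a small Python parser normalizing them to seconds (an illustrative sketch, not Spring Boot's actual converter):

```python
import re

def parse_duration_seconds(value: str) -> int:
    """Normalize the three accepted duration shapes to whole seconds."""
    if value.isdigit():                      # e.g. "60": plain long, seconds by default
        return int(value)
    iso = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value, re.IGNORECASE)
    if iso and value.upper() != "PT":        # e.g. "PT30M": ISO-8601 (java.time.Duration)
        h, m, s = (int(g) if g else 0 for g in iso.groups())
        return h * 3600 + m * 60 + s
    simple = re.fullmatch(r"(\d+)([smhd])", value)
    if simple:                               # e.g. "10s": value coupled with a unit
        n, unit = int(simple.group(1)), simple.group(2)
        return n * {"s": 1, "m": 60, "h": 3600, "d": 86400}[unit]
    raise ValueError(f"unsupported duration: {value}")

print(parse_duration_seconds("60"))     # 60
print(parse_duration_seconds("PT30M"))  # 1800
print(parse_duration_seconds("10s"))    # 10
```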

Available implementations

There are eight implementations provided:

Implementation          Data Storage
SpringDataRateLimiter   Spring Data

Bucket4j implementations require the relevant bean with @Qualifier("RateLimit"):

JCache - javax.cache.Cache

Hazelcast -

Ignite - org.apache.ignite.IgniteCache

Infinispan - org.infinispan.functional.ReadWriteMap

Common application properties

Property namespace: zuul.ratelimit

Property name        Values                   Default Value
default-policy-list  List of Policy           -
policy-list          Map of Lists of Policy   -
postFilterOrder      int                      FilterConstants.SEND_RESPONSE_FILTER_ORDER - 10

Deny Request properties

Property name         Values                                                   Default Value
origins               list of origins to have the access denied                -
response-status-code  the http status code to be returned on a denied request  403 (FORBIDDEN)

Policy properties:

Property name  Values              Default Value
limit          number of requests  -
quota          time of requests    -

Further Customization

This section details how to add custom implementations

Key Generator

If the application needs to control the key strategy beyond the options offered by the type property then it can be done just by creating a custom RateLimitKeyGenerator bean[1] implementation adding further qualifiers or something entirely different:

  @Bean
  public RateLimitKeyGenerator ratelimitKeyGenerator(RateLimitProperties properties, RateLimitUtils rateLimitUtils) {
      return new DefaultRateLimitKeyGenerator(properties, rateLimitUtils) {
          @Override
          public String key(HttpServletRequest request, Route route, RateLimitProperties.Policy policy) {
              return super.key(request, route, policy) + ":" + request.getMethod();
          }
      };
  }

Error Handling

This framework uses 3rd party applications to control the rate limit access and these libraries are out of control of this framework. If one of the 3rd party applications fails, the framework will handle this failure in the DefaultRateLimiterErrorHandler class which will log the error upon failure.

If there is a need to handle the errors differently, it can be achieved by defining a custom RateLimiterErrorHandler bean[2], e.g:

  @Bean
  public RateLimiterErrorHandler rateLimitErrorHandler() {
      return new DefaultRateLimiterErrorHandler() {
          @Override
          public void handleSaveError(String key, Exception e) {
              // custom code
          }

          @Override
          public void handleFetchError(String key, Exception e) {
              // custom code
          }

          @Override
          public void handleError(String msg, Exception e) {
              // custom code
          }
      };
  }

Event Handling

If the application needs to be notified when a rate limit access was exceeded then it can be done by listening to RateLimitExceededEvent event:

    @EventListener
    public void observe(RateLimitExceededEvent event) {
        // custom code
    }


Spring Cloud Zuul Rate Limit is released under the non-restrictive Apache 2.0 license and follows a very standard GitHub development process, using the GitHub tracker for issues and merging pull requests into master. If you want to contribute even something trivial, please do not hesitate, but follow the guidelines below.

Download Details:
Author: marcosbarbero
Source Code:
License: Apache-2.0 License

#spring  #spring-boot  #java 

Rate Limit Auto-configure for Spring Cloud Netflix Zuul



Ripple’s STRONG Case vs. SEC!! Implications For XRP? 🤔

    0:00 Intro
    1:07 Ripple Lawsuit Recap
    5:20 XRP Army In Court?
    6:56 Ripple & XRP News
    8:40 Ripple Lawsuit Update
    10:56 XRP Price Analysis
    12:46 Ripple Lawsuit Outcome Analysis
    15:46 Conclusion

📺 The video in this post was made by Coin Bureau

🔺 DISCLAIMER: The article is for information sharing. The content of this video is solely the opinions of the speaker who is not a licensed financial advisor or registered investment advisor. Not investment advice or legal advice.
Cryptocurrency trading is VERY risky. Make sure you understand these risks and that you are responsible for what you do with your money
🔥 If you’re a beginner, I believe the article below will be useful to you ☞ What You Should Know Before Investing in Cryptocurrency - For Beginner
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!

#bitcoin #blockchain #strong #case #sec #xrp

Ripple's STRONG Case vs. SEC!! Implications For XRP? 🤔

Case Study on Mobile app Giffi - Prismetric

Giffi is a mobile application that provides multiple services to the customer like deliveries of food, grocery, liquor, pet food, flower, parcel, and any eCommerce purchasable items.

Client Requirement

Users should be able to get services like deliveries of food, grocery, liquor, pet food, flowers, and parcels, as well as an eCommerce platform, through a single mobile application.
Users should be able to pay for the delivery, track it, and receive the delivered item successfully.
The merchant should be able to register, choose the appropriate category for the service, list the products/items, and receive order/delivery requests.
The driver should be able to register, choose the delivery category, get delivery requests automatically, and deliver the item/product from the merchant's location to the customer's location.
The application should support multiple languages.

Giffi User App features and functionalities

Users can see various merchant & product details from each individual category like food, grocery, liquor, pet food, flower, parcel delivery, and the eCommerce platform
Users can choose the delivery type, search & filter products/items, view product details, order products, and pay through the mobile application
Manage shopping cart details, order details, and returns of orders
Able to request home delivery or take away the order
Chat with the merchant
Track the order status and the driver on the map with ETA
Provide ratings and reviews to the driver and merchant
Manage profile information, delivery addresses, and settings

Giffi Merchant App features and functionalities

Register into the panel by choosing the category of the business from food, grocery, liquor, pet food, flower, or parcel delivery, or the eCommerce platform
Merchants can manage product listings and offer & discount details
Merchants can view and manage the e-wallet and transaction history
Merchants can chat with users
Merchants can manage the orders
Merchants can receive ratings and reviews from users
Manage the merchant profile and other settings

Giffi Driver App features and functionalities

Register into the application by choosing a category from deliveries of food, grocery, liquor, pet food, flowers, parcels, or eCommerce, and by providing personal and vehicle information
Drivers can manage their availability status - online and offline.
Drivers receive delivery requests from customers automatically.
Manage the delivery request and the delivery of items/products from the merchant's location to the customer's location
Drivers can receive ratings and reviews from customers
Drivers can view and manage the e-wallet and transaction history
Drivers can chat with customers
Manage the driver profile and other settings

The Solution we provided

Technical Specification & Implementation
Android: Android Studio with Java
iOS: XCode with Swift
Web panel: PHP with the Codeigniter Framework
It was quite a challenging task to manage the complete application architecture as it involved multiple categories for the services and assigning them to an appropriate driver. From the perspective of the UI/UX design, it was tricky as well. It has been managed in such a way that it does not make it difficult to interact with the mobile application; the navigation is made easier for each user type.


We successfully developed & implemented the mobile application (Android & iOS) for the customer & driver and the web panel for the merchant. Through the mobile app, a customer can order products from services like deliveries of food, grocery, liquor, pet food, flowers, and parcels, as well as the eCommerce platform, all through a single mobile application.
Link for Play Store Giffi User App
Link for App Store Giffi User App
Link for Play Store Giffi Courier App
Link for App Store Giffi Courier App

Continue Reading:

#case #mobile-app-case-study #app-case-study #mobile-app-development-company #app-development

Case Study on Mobile app Giffi - Prismetric

Case Study on mobile app : Proclapp- Prismetric

The Proclapp application is all about sharing knowledge, asking & answering questions, writing articles, reaching users and experts through voice notes and video, and uploading and sharing research papers. It brings all intellectuals and knowledge seekers together to create and share knowledgeable content for the benefit of humanity.

Client Requirement
To develop a mobile app and web application that creates a platform bringing people together, where they can ask and answer/solve questions/problems, seek expert help through voice & video, publish research papers, and chat with friends, with followers/following functionality.

Application features and functionality
➛ View and interact with the category wise questions/answers, articles, voice, video, and research paper posted by the other users.
➛ Able to answer questions, discuss articles, carry forward questions & articles to discuss at a later date, and flash questions and articles to connections.
➛ Ask new questions, write new articles, make voice and video notes to request expert and upload the research paper.
➛ View questions/answers, articles, voice & video and uploaded research paper separately.
➛ Search for the questions, article and research paper.
➛ Navigate the other menus.
➛ Able to see the list of experts and profile of them.
➛ Ask questions and seek expert advice from them.
➛ Able to post the advertisement with Description, Images, and links.
➛ Choose the time for the advertisement to stay live and proceed with the payment for that.
➛ View & edit profile details and save the changes.
➛ Add and modify the existing topics and categories.
➛ View the questions/answers, article, voice, video, and research paper posted by the user.
➛ View number of followers & following and list of it.
➛ Trending topics
➛ List of Expert
➛ View the list of friends and suggestions
➛ Able to add as a friend.
➛ Able to chat with friends one to one
➛ Receive notification for each and every action
➛ On/off functionalities for each and every notification
➛ Change password
➛ Help and Support
➛ Bookmark
➛ Category and Topics

Managing the carryover details of the individual user.
Technical Specification & Implementation
➛ Android: Android Studio with Java
➛ iOS: XCode with Swift
➛ Web Application: PHP
Successfully developed and implemented the web application and mobile application (Android & iOS) where users can ask & answer the questions, upload & share articles, upload video and voice notes, and upload research papers. Moreover, users can chat with friends one to one and also upload the advertisement.

#mobile #app #case #study #on #mobile-app

Case Study on mobile app : Toya - Prismetric

Toya Application will help farmers find the various Tractor and Equipment from Verified Provider locally without going outside the home.

Client Requirement
Toya User
Develop the multi-language mobile application that helps a farmer find the tractors and equipment from various providers.
Toya Provider
Develop the multi-language mobile application that lets the provider list their tractor and equipment in the application.
Application Features And Functionalities – Toya User
Home/Choose Tractor

Through this feature, the user will choose the required tractors and equipment from the various listed tractors & equipment.
Post Job/Requirement

Through this feature, the users will choose the location, booking date & time, and required duration.
My Booking

The user will be able to see current, upcoming, and completed booking details.
Rating and Review

The user will be able to give ratings and reviews to the provider once the service is done.

The user will be able to load an amount into the wallet and pay for services through the wallet
Refer and Earn

The user will be able to refer a friend and earn money.

Application Feature And Functionalities - Toya Provider
Home/Listing of Vehicle
Through this feature, the service provider will be able to list their vehicles in the application.
My Order
The service provider will able to see the received order (accept/reject), current order, upcoming order, and completed order.
Wallet
The service provider will be able to receive the earned service amount in the wallet and withdraw it to a bank account.
Earning Statistics
The service provider will be able to see monthly and yearly earnings.
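The monthly and yearly earnings views could be backed by a simple aggregation over the provider's completed orders. This is a hypothetical sketch under assumed names (`EarningsStats`, a per-date earnings map), not the production implementation:

```java
import java.time.LocalDate;
import java.time.YearMonth;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch: roll per-date earnings up into monthly and yearly totals.
class EarningsStats {

    // Sum earnings per calendar month, sorted chronologically.
    public static Map<YearMonth, Long> monthlyTotals(Map<LocalDate, Long> earningsByDate) {
        Map<YearMonth, Long> totals = new TreeMap<>();
        for (Map.Entry<LocalDate, Long> e : earningsByDate.entrySet()) {
            totals.merge(YearMonth.from(e.getKey()), e.getValue(), Long::sum);
        }
        return totals;
    }

    // Sum all earnings that fall within the given year.
    public static long yearlyTotal(Map<LocalDate, Long> earningsByDate, int year) {
        long sum = 0;
        for (Map.Entry<LocalDate, Long> e : earningsByDate.entrySet()) {
            if (e.getKey().getYear() == year) sum += e.getValue();
        }
        return sum;
    }
}
```

In practice this aggregation would likely run as a database query rather than in app code, but the grouping logic is the same.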
Refer and Earn
The user will be able to refer friends and earn money.
Provider Settings
Invite users and prospective customers
Contact Us
Language Selection
Technical Specification & Implementation
Android: Android Studio with Java

#case #study #on #mobile #app #case-study-on-mobile-app

Case Study on mobile app : Toya - Prismetric

Harry Styles Shirts (Limited Merchandise) – Harry Styles Merch

Buy Adore You T Shirt from Harry Styles Merch with worldwide free shipping. We have the best quality Harry Styles Merchandise for you and your loved ones!

#shirt #phone #case #hoodie #hat

Harry Styles Shirts (Limited Merchandise) – Harry Styles Merch

Case study on mobile app; DreamG

The Dream-G application allows users to chat and make voice and video calls with random people through the mobile application. Users can create a profile, perform all these actions, and search for a person by name.

Client Requirement
The client came with the requirement of developing a unique mobile application that lets users chat with others and make voice and video calls. Furthermore, the user should be able to subscribe to a plan by paying a certain amount.

App Features and Functionalities
The user can see the list of people, view the profile of a particular person, and chat, voice call, or video call them.
The user can see the list of entertainers and can chat, voice call, and video call them.
Users can search for any person by entering their name.
Through the chat option, the user can see their past chat history with all users. The user can also open any chat and send messages again.
The user can see their profile details and edit or modify the profile photo, name, and other details. The user can also see call log details.
The user can see the number of coins available to them, and through these coins the user will be able to make voice and video calls.
The user can purchase a plan listed in the application according to their requirements and will be able to chat with people.
The user can refer the mobile application to other people and earn reward coins.
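The coin balance that gates voice and video calls, and the referral rewards that top it up, might work along these lines. The `CoinAccount` class and the per-minute rate are assumptions for illustration only, not DreamG's actual pricing or code:

```java
// Hypothetical sketch of a coin balance that gates voice/video calls.
class CoinAccount {
    private int coins;
    private static final int COINS_PER_CALL_MINUTE = 10; // assumed rate, for illustration

    public CoinAccount(int initialCoins) { coins = initialCoins; }

    // Credit coins earned by referring the app to other people.
    public void addReferralReward(int rewardCoins) { coins += rewardCoins; }

    // How many whole minutes of calling the current balance covers.
    public int affordableMinutes() { return coins / COINS_PER_CALL_MINUTE; }

    // Deduct the cost of a call if the balance covers it; otherwise the app
    // would prompt the user to purchase a plan.
    public boolean chargeForCall(int minutes) {
        int cost = minutes * COINS_PER_CALL_MINUTE;
        if (cost > coins) return false;
        coins -= cost;
        return true;
    }

    public int getCoins() { return coins; }
}
```

Checking affordability before connecting the call keeps the charge logic in one place, whether the coins came from a purchased plan or from referrals.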

To create a unique user experience for chat, voice, and video calls.

Technical Specification & Implementation
Integration with the payment gateway
Android: Android Studio with Java
We successfully developed and implemented the Dream-G mobile application, through which users can chat, voice call, and video call other people. Users can also purchase a subscription plan and refer the application to other people.

#case #study #case-study-on-mobile-app #mobile-app-case-study

Case study on mobile app; DreamG