
Building Facial Recognition Time Attendance App in React Native

In this tutorial, we’ll learn how to build an attendance app in React Native that uses facial recognition to verify that a student has actually attended a class.

There are many applications for facial recognition technology. On mobile, it’s mostly used for unlocking the phone or authorizing payments by taking a selfie.

Prerequisites

Basic knowledge of React Native is required to follow this tutorial.

This tutorial also assumes you have prior experience working with Bluetooth peripherals from a React Native app. If you’re new to it, be sure to check out my tutorial on creating a realtime attendance app with React Native and BLE. Otherwise, simply replace or skip the BLE integration with something like geolocation, as it’s only used to determine whether the user is physically present in a specific place (a rough sketch follows below).
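For reference, here’s what a geolocation-based presence check could look like. This is only a sketch: React Native 0.59 still ships navigator.geolocation in core, the classroom coordinates and allowed radius below are made-up values, and on Android you’d also need the ACCESS_FINE_LOCATION permission:

    // rough geolocation-based presence check (sketch; coordinates and radius are made up)
    const ROOM_LATITUDE = 14.5995;
    const ROOM_LONGITUDE = 120.9842;
    const MAX_DISTANCE_METERS = 30;

    navigator.geolocation.getCurrentPosition(
      (position) => {
        const { latitude, longitude } = position.coords;
        // equirectangular approximation: accurate enough over tens of meters
        const dx = (longitude - ROOM_LONGITUDE) * 111320 * Math.cos(ROOM_LATITUDE * Math.PI / 180);
        const dy = (latitude - ROOM_LATITUDE) * 110540;
        const distance = Math.sqrt(dx * dx + dy * dy);
        if (distance <= MAX_DISTANCE_METERS) {
          // treat the user as physically present and proceed to the selfie step
        }
      },
      (err) => console.log('geolocation error: ', err),
      { enableHighAccuracy: true, timeout: 15000 }
    );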

The following versions will be used in this tutorial. If you encounter any issues, be sure to try switching to those versions:

  • Node 9.0.0 - required by the BLE peripheral.
  • Node 11.2.0 - used by React Native CLI.
  • Yarn 1.13.0 - used for installing React Native modules and server modules.
  • React Native CLI 2.0.1
  • React Native 0.59.9
  • React Native Camera 2.10.2

For implementing facial recognition, you’ll need a Microsoft Azure account. Simply search “Azure sign up” or go to this page to sign up.

Optionally, you’ll need the following if you want to integrate BLE:

  • BLE Peripheral - this can be any IoT device with Bluetooth, Wi-Fi, and Node.js support. For this tutorial, I’m using a Raspberry Pi 3 with Raspbian Stretch Lite installed.

App overview

We will be creating an attendance app with facial recognition features. It will have both a server-side (Node.js) and a client-side (React Native) component.

The server is responsible for registering the faces with Microsoft Cognitive Services’ Face API, as well as acting as a BLE peripheral. The BLE integration is needed to verify that the user is physically in the room; unlike a GPS location, a BLE proximity check is much harder to spoof.

On the other hand, the app is responsible for the following:

  • Scanning and connecting to a BLE peripheral.
  • Asking for the user’s name.
  • Asking the user to take a selfie to check if their face is registered.

Here’s what the app will look like when you open it:

react-native-facial-recognition-img1

When you connect to a peripheral, it will ask for your full name:

react-native-facial-recognition-img2

After that, it will ask you to take a selfie. When you press the shutter button, the image is sent to Microsoft Cognitive Services to check if the face is similar to one that was previously registered. If it is, then it responds with the following:

react-native-facial-recognition-img3

You can find the source code in this GitHub repo. The master branch is where all the latest code is, and the starter branch contains the starter code for following this tutorial.

What is Cognitive Services?

Before we proceed, let’s first quickly go over what Cognitive Services is. Cognitive Services is a collection of services that allows developers to easily add machine learning features to their applications. These services are available via APIs grouped under the following categories:

  • Vision - for analyzing images and videos.
  • Speech - for converting speech to text and vice versa.
  • Language - for processing natural language.
  • Decision - for content moderation.
  • Search - for implementing search algorithms that are used on Bing.

Today we’re only concerned with Vision, more specifically the Face API. This API is used for detecting faces in an image and finding similar ones.

Setting up Cognitive Services

In this section, we’ll be setting up Cognitive Services in the Azure portal. It assumes that you already have an Azure account.

First, go to the Azure portal and search for “Cognitive Services”. Click on the first result under Services:

react-native-facial-recognition-img4

Once you’re there, click on the Add button. This will lead you to the page where you can search for the specific cognitive service you want to use:

react-native-facial-recognition-img5

Next, search for “face” and click on the first result:

react-native-facial-recognition-img6

On the page that follows, click on the Create button to add the service:

react-native-facial-recognition-img7

After that, it will ask for the details of the service you want to create. Enter the following details:

  • Name: attendance-app
  • Subscription: Pay-As-You-Go
  • Location: wherever the server nearest to you is
  • Pricing tier: F0 (this is within the free range so you won’t actually get charged)
  • Resource group: click on Create new

react-native-facial-recognition-img8

Enter the details of the resource group you want to add the service to. In this case, I simply put in the name then clicked OK:

react-native-facial-recognition-img9

Once the resource group is created, you can now add the cognitive service. Here’s what it looks like as it’s deploying:

react-native-facial-recognition-img10

Once it’s created, you’ll find it listed under Cognitive Services:

react-native-facial-recognition-img11

If you click on it, you’ll see the overview page. Click on the Show access keys link to see the API keys that you can use to make requests to the API. At the bottom, you can also see the number of API calls you’ve made and the total allotted to the pricing tier you chose:

react-native-facial-recognition-img12

Bootstrapping the app

We will only be implementing the face recognition feature in this tutorial so I’ve prepared a starter project which you can clone and start with:

    git clone https://github.com/anchetaWern/RNFaceAttendance
    cd RNFaceAttendance
    git checkout starter
    yarn
    react-native eject
    react-native link react-native-ble-manager
    react-native link react-native-camera
    react-native link react-native-vector-icons
    react-native link react-native-exit-app

Do the same for the server as well:

    cd server
    yarn

Next, update the android/app/build.gradle file and add the missingDimensionStrategy. This is necessary for React Native Camera to work:

    android {
      compileSdkVersion rootProject.ext.compileSdkVersion
    
      compileOptions {
        // ...    
      }
    
      defaultConfig {
        applicationId "com.rnfaceattendance"
        minSdkVersion rootProject.ext.minSdkVersion
        targetSdkVersion rootProject.ext.targetSdkVersion
        versionCode 1
        versionName "1.0"
        missingDimensionStrategy 'react-native-camera', 'general' // add this
      }
    }

The starter project already includes the code for implementing the BLE peripheral and connecting to it.

Building the app

Now we’re ready to start building the app. We’ll start with the server component.

Server

The server is where we will add the code for registering the faces. We will create an Express server so we can simply access different routes to perform different actions. Start by importing all the modules we need:

    // server/server.js
    const express = require("express");
    const axios = require("axios");
    const bodyParser = require("body-parser");
    const app = express();
    const fs = require('fs')
    app.use(bodyParser.urlencoded({ extended: true }));
    app.use(bodyParser.json());

Next, create the base options object to be used for initializing axios instances. We will use this later on to make requests to the API. You need to supply a different URL based on your location. You can find the list of locations here. The API key (Ocp-Apim-Subscription-Key) is passed as a header value along with the Content-Type:

    const loc = 'southeastasia.api.cognitive.microsoft.com'; // replace with the server nearest to you
    const key = 'YOUR COGNITIVE SERVICES API KEY';
    const facelist_id = 'class-3e-facelist'; // the ID of the face list we'll be working with
    
    const base_instance_options = {
      baseURL: `https://${loc}/face/v1.0`,
      timeout: 10000, // give image uploads enough time to complete
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
      }
    };

Next, add the route for creating a face list. This requires you to pass in the unique ID of the face list as a route segment. In this case, we use the facelist_id we declared earlier (class-3e-facelist). To describe the face list further, we’re also passing in the name:

    app.get("/create-facelist", async (req, res) => {
      try {
        const instance = axios.create(base_instance_options); // create an actual axios instance from the base options
        const response = await instance.put(
          `/facelists/${facelist_id}`,
          {
            name: "Classroom 3-E Facelist"
          }
        );
    
        console.log("created facelist: ", response.data);
        res.send('ok');
    
      } catch (err) {
        console.log("error creating facelist: ", err);
        res.send('not ok');
      }
    });

Once the face list is created, we can proceed to add faces to it. This time, the Content-Type should be application/octet-stream as opposed to application/json, because the specific API endpoint we’re using requires a file to be passed in the request body:

    app.get("/add-face", async (req, res) => {
      try {
        const instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } }; // copy the headers too, so the shared base options aren't mutated
        instance_options.headers['Content-Type'] = 'application/octet-stream';
        const instance = axios.create(instance_options);
    
        const MY_FILE_PATH = './path/to/selfie.png';
        const file_contents = fs.readFileSync(MY_FILE_PATH); // read the contents of the file as a Buffer
    
        const response = await instance.post(
          `/facelists/${facelist_id}/persistedFaces`,
          file_contents
        );
    
        console.log('added face: ', response.data);
        res.send('ok');
    
      } catch (err) {
        console.log("err: ", err);
        res.send('not ok');
      }
    });

The code above requires you to change the file name and refresh the page every time you register a new face. You can also loop through the files in a specific directory and register them all in one go, as sketched below. Just be aware that you might exceed the limits and your requests might get throttled, since we selected the free tier earlier.
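Here’s a minimal sketch of such a bulk registration route, assuming your selfies live in a local ./faces directory (the directory and the /add-faces route name are my own; adjust them to your setup). Processing the files sequentially with await also helps you stay under the free tier’s rate limit:

    app.get("/add-faces", async (req, res) => {
      try {
        const instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } };
        instance_options.headers['Content-Type'] = 'application/octet-stream';
        const instance = axios.create(instance_options);

        const FACES_DIR = './faces'; // hypothetical directory containing one selfie per person
        const files = fs.readdirSync(FACES_DIR);

        for (const file of files) {
          const file_contents = fs.readFileSync(`${FACES_DIR}/${file}`);
          const response = await instance.post(
            `/facelists/${facelist_id}/persistedFaces`,
            file_contents
          );
          console.log(`added ${file}: `, response.data);
        }

        res.send('ok');
      } catch (err) {
        console.log("err: ", err);
        res.send('not ok');
      }
    });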

Mobile app

Now we can proceed to coding the app. Start by importing the additional React Native modules that we need:

    // App.js
    import {
      Platform,
      StyleSheet,
      Text,
      View,
      SafeAreaView,
      PermissionsAndroid,
      NativeEventEmitter,
      NativeModules,
      Button,
      FlatList,
      Alert,
      ActivityIndicator,
      TouchableOpacity // add
    } from 'react-native';
    
    import { RNCamera } from 'react-native-camera'; // for taking selfies
    import base64ToArrayBuffer from 'base64-arraybuffer'; // for converting base64 images to array buffer
    import MaterialIcons from 'react-native-vector-icons/MaterialIcons'; // for showing icons
    import axios from 'axios'; // for making requests to the cognitive services API

Next, add the default configuration for making requests with axios:

    const key = 'YOUR COGNITIVE SERVICES API KEY';
    const loc = 'southeastasia.api.cognitive.microsoft.com'; // replace with the server nearest to you
    
    const base_instance_options = {
      baseURL: `https://${loc}/face/v1.0`,
      timeout: 10000,
      headers: {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': key
      }
    };

Inside the component’s class definition, add the initial values for the camera’s visibility and the loading indicator:

    export default class App extends Component {
    
      state = {
        is_scanning: false,
        peripherals: null,
        connected_peripheral: null,
        user_id: '',
        fullname: '',
      
        // add these:
        show_camera: false,
        is_loading: false
      }
    
    }

When the user enters the room, that’s when we want to show the camera:

    enterRoom = (value) => {
      this.setState({
        user_id: RandomId(15),
        fullname: value,
        show_camera: true 
      });
    }
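Note that RandomId() is a helper that ships with the starter project. If you’re wiring things up from scratch, a minimal equivalent could look like this (an assumption; the starter’s implementation may differ):

    // minimal stand-in for the starter project's RandomId helper
    // (assumed behavior: returns a random alphanumeric string of the given length);
    // place this above the App component in App.js
    const RandomId = (length) => {
      const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
      let id = '';
      for (let i = 0; i < length; i++) {
        id += chars.charAt(Math.floor(Math.random() * chars.length));
      }
      return id;
    };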

Next, update the render() method to look like the following:

    render() {
      const { connected_peripheral, is_scanning, peripherals, show_camera, is_loading } = this.state;
    
      return (
        <SafeAreaView style={{flex: 1}}>
          <View style={styles.container}>
            {
              !show_camera &&
              <View style={styles.header}>
                <View style={styles.app_title}>
                  <Text style={styles.header_text}>BLE Face Attendance</Text>
                </View>
                <View style={styles.header_button_container}>
                  {
                    !connected_peripheral &&
                    <Button
                      title="Scan"
                      color="#1491ee"
                      onPress={this.startScan} />
                  }
                </View>
              </View>
            }
    
            <View style={styles.body}>
              {
                !show_camera && is_scanning &&
                <ActivityIndicator size="large" color="#0000ff" />
              }
    
              {
                show_camera &&
                <View style={styles.camera_container}>
                  {
                    is_loading &&
                    <ActivityIndicator size="large" color="#0000ff" />
                  }
    
                  {
                    !is_loading &&
                    <View style={{flex: 1}}>
                      <RNCamera
                        ref={ref => {
                          this.camera = ref;
                        }}
                        style={styles.preview}
                        type={RNCamera.Constants.Type.front}
                        flashMode={RNCamera.Constants.FlashMode.on}
                        captureAudio={false}
                      />
    
                      <View style={styles.camera_button_container}>
                        <TouchableOpacity onPress={this.takePicture} style={styles.capture}>
                          <MaterialIcons name="camera" size={50} color="#e8e827" />
                        </TouchableOpacity>
                      </View>
                    </View>
                  }
    
                </View>
              }
    
              {
                !connected_peripheral && !show_camera &&
                <FlatList
                  data={peripherals}
                  keyExtractor={(item) => item.id.toString()}
                  renderItem={this.renderItem}
                />
              }
    
            </View>
          </View>
        </SafeAreaView>
      );
    }

In the code above, all we’re doing is adding the camera and selectively showing the different components based on whether the camera is visible. We only want to show the camera (and nothing else) when show_camera is true because it occupies the entire screen.

Let’s break down the code for the RNCamera a bit before moving on. First, we set this.camera to refer to this specific camera component. This allows us to use this.camera later on to perform different operations with the camera. The type is set to front because we’re primarily catering to users taking selfies for attendance. captureAudio is set to false because we don’t need audio and it defaults to true.

    <RNCamera
      ref={ref => {
        this.camera = ref;
      }}
      style={styles.preview}
      type={RNCamera.Constants.Type.front}
      flashMode={RNCamera.Constants.FlashMode.on}
      captureAudio={false}
    />

Next, we proceed to the code for taking pictures:

    takePicture = async() => {
      if (this.camera) { // check if camera has been initialized
        this.setState({
          is_loading: true
        });
    
        const data = await this.camera.takePictureAsync({ quality: 0.25, base64: true });
        const selfie_ab = base64ToArrayBuffer.decode(data.base64);
        
        try {
          const facedetect_instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } }; // copy headers so the shared base options aren't mutated
          facedetect_instance_options.headers['Content-Type'] = 'application/octet-stream';
          const facedetect_instance = axios.create(facedetect_instance_options);
    
          const facedetect_res = await facedetect_instance.post(
            `/detect?returnFaceId=true&detectionModel=detection_02`,
            selfie_ab
          );
    
          console.log("face detect res: ", facedetect_res.data);
         
          if (facedetect_res.data.length) {
    
            const findsimilars_instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } };
            findsimilars_instance_options.headers['Content-Type'] = 'application/json';
            const findsimilars_instance = axios.create(findsimilars_instance_options);
            const findsimilars_res = await findsimilars_instance.post(
              `/findsimilars`,
              {
                faceId: facedetect_res.data[0].faceId,
                faceListId: 'class-3e-facelist', // must match the face list created on the server
                maxNumOfCandidatesReturned: 2,
                mode: 'matchPerson'
              }
            );
    
            console.log("find similars res: ", findsimilars_res.data);
            this.setState({
              is_loading: false
            });
    
            if (findsimilars_res.data.length) {
              Alert.alert("Found match!", "You've successfully attended!");
              this.attend();
    
            } else {
              Alert.alert("No match", "Sorry, you are not registered");
            }
    
          } else {
            Alert.alert("error", "Cannot find any face. Please make sure there is sufficient light when taking a selfie");
          }
    
        } catch (err) {
          console.log("err: ", err);
          this.setState({
            is_loading: false
          });
        }
      }
    }

Breaking down the code above: we first take a picture using this.camera.takePictureAsync(). This accepts an object containing the options for the picture to be taken. In this case, we’re setting the quality to 0.25 (25% of the maximum quality). This ensures that the API won’t reject our image because of its size. Play with this value so that the image passes the API’s size limit while still retaining enough quality for the API to recognize faces clearly. base64 is set to true, which means data will contain the base64 representation of the image once the response is available. After that, we use the base64ToArrayBuffer library to convert the image to a format the API understands:

    const data = await this.camera.takePictureAsync({ quality: 0.25, base64: true });
    const selfie_ab = base64ToArrayBuffer.decode(data.base64);

Next, we make the request to the API. This is pretty much the same as what we did on the server earlier, only this time we’re sending it to the /detect endpoint. This detects faces in a picture and, depending on the detection model used, can also return the positions of the different face landmarks (eyes, nose, mouth).

We’re also passing in additional parameters, such as returnFaceId, which asks the API to return a unique ID assigned to the detected face. detectionModel is set to detection_02 because it’s better than the default option (detection_01) at detecting blurry faces and faces in a slightly side-view position. Do note that unlike the default option, this detection model won’t return the face landmarks (positions of eyes, nose, mouth):

    const facedetect_instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } };
    facedetect_instance_options.headers['Content-Type'] = 'application/octet-stream';
    const facedetect_instance = axios.create(facedetect_instance_options);
    
    const facedetect_res = await facedetect_instance.post(
      `/detect?returnFaceId=true&detectionModel=detection_02`,
      selfie_ab
    );

If a face is detected, we make another request to the API, this time to check if the face detected earlier has a match within the face list we created on the server. Since we only need to send JSON data, the Content-Type is set to application/json. The endpoint is /findsimilars and it requires the faceId and faceListId to be passed in the request body. faceId is the unique ID assigned to the face detected earlier, and faceListId is the ID of the face list we created earlier on the server. maxNumOfCandidatesReturned and mode are optional:

    if (facedetect_res.data.length) {
      const findsimilars_instance_options = { ...base_instance_options, headers: { ...base_instance_options.headers } };
      findsimilars_instance_options.headers['Content-Type'] = 'application/json';
      const findsimilars_instance = axios.create(findsimilars_instance_options);
      const findsimilars_res = await findsimilars_instance.post(
        `/findsimilars`,
        {
          faceId: facedetect_res.data[0].faceId,
          faceListId: 'class-3e-facelist', // the ID of the face list we created on the server
          maxNumOfCandidatesReturned: 2, // the maximum number of matches to return
          mode: 'matchPerson' // the default mode. This tries to find faces of the same person as possible by using internal same-person thresholds
        }
      );
       
      // rest of the code..
    }

If the above request returns something, it means the person who took the selfie has their face registered. Each match comes back with a confidence level ranging between 0 and 1; the higher the confidence level, the more similar the faces are. There’s currently no way of specifying the threshold in the request itself (for example: only return matches with above an 80% confidence level), but you can filter the returned candidates yourself, as sketched below.
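Here’s a minimal sketch of such a client-side filter, assuming an 80% cutoff. The threshold value is my own choice, not an API parameter; each candidate returned by /findsimilars carries a persistedFaceId and a confidence score:

    // only accept candidates above a threshold we pick ourselves
    const MIN_CONFIDENCE = 0.8; // hypothetical cutoff
    const strong_matches = findsimilars_res.data.filter(
      (candidate) => candidate.confidence >= MIN_CONFIDENCE
    );

    if (strong_matches.length) {
      Alert.alert("Found match!", "You've successfully attended!");
      this.attend();
    } else {
      Alert.alert("No match", "Sorry, you are not registered");
    }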

Lastly, here are the additional styles for the camera component:

    camera_container: {
      flex: 1,
      flexDirection: 'column',
      backgroundColor: 'black'
    },
    preview: {
      flex: 1,
      justifyContent: 'flex-end',
      alignItems: 'center',
    },
    camera_button_container: {
      flex: 0,
      flexDirection: 'row',
      justifyContent: 'center',
      backgroundColor: '#333'
    }

Running the app

At this point, you’re ready to run the app:

    nodemon server/server.js
    react-native run-android
    react-native run-ios

Start by creating a face list (http://raspberrypi.local/create-facelist in my case), then add faces to it (http://raspberrypi.local/add-face). Once you’ve added the faces, you can run the app and scan for peripherals. Connect to the peripheral that’s listed and it will ask you to enter your full name. After that, take a selfie and wait for the API to respond.
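If you prefer the command line over a browser, you can hit the same routes with curl. The port here is an assumption; use whatever port the app.listen() call in the starter project’s server.js specifies:

    curl http://raspberrypi.local:3000/create-facelist
    curl http://raspberrypi.local:3000/add-face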

Conclusion

In this tutorial, you learned how to use Microsoft Cognitive Services to create an attendance app which uses facial recognition to identify people. Specifically, you learned how to use React Native Camera and convert its response to a format that can be understood by the API.

Original article sourced at: https://pusher.com/tutorials
