In this ServiceNow interview questions blog, I have collected the most frequently asked questions by interviewers. If you wish to brush up on your ServiceNow basics, I would recommend you take a look at this video first. It will introduce you to ServiceNow basics and stand you in good stead as you get started with this 'ServiceNow Interview Questions' blog.
In case you have attended a ServiceNow interview in the recent past, do paste those ServiceNow interview questions in the comments section and we’ll answer them ASAP. So let us not waste any time and quickly start with this compilation of ServiceNow Interview Questions.
I have divided these questions in two sections:
So let us start then,
ServiceNow is a cloud based IT Service Management (ITSM) tool. It provides a single system of record for:
All aspects of IT services live in the ServiceNow ecosystem. It gives us a complete view of services and resources. This allows for broad control of how to best allocate resources and design the process flow of those services. Refer to this link to know more: What Is ServiceNow?
Applications in ServiceNow represent packaged solutions for delivering services and managing business processes. In simple words it is a group of modules which provides information related to those modules. For example Incident application will provide information related to Incident Management process.
CMDB stands for Configuration Management Database. CMDB is a repository. It acts as a data warehouse for information technology installations. It holds data related to a collection of IT assets, and descriptive relationships between such assets.
LDAP is the Lightweight Directory Access Protocol. You can use it for user data population and user authentication. ServiceNow integrates with an LDAP directory to streamline the user login process and to automate the creation of users and the assignment of their roles.
Data lookup and record matching feature helps to set a field value based on some condition instead of writing scripts.
For example:
On Incident forms, the sample priority lookup rules automatically set the incident Priority based on the incident Impact and Urgency values. Data lookup rules let administrators specify the conditions and fields where they want the data lookup to occur.
CMDB baselines help you understand and control the changes made to a Configuration Item (CI). These baselines act as a snapshot of a CI.
Following steps will help you do the same:
View defines the arrangement of fields on a form or a list. For one single form we can define multiple views according to the user preferences or requirement.
An ACL (access control list) defines what data users can access and how they can access it in ServiceNow.
Impersonating a user means giving the administrator access to what that user would have access to, including the same menus and modules. ServiceNow records the administrator's activities while impersonating another user. This feature helps in testing: you can impersonate a user and test as them instead of logging out of your session and logging in again with that user's credentials.
Dictionary overrides provide the ability to define a field on an extended table differently from the field on the parent table. For example, for a field on the Task [task] table, a dictionary override can change the default value on the Incident [incident] table without affecting the default value on Task [task] or Change [change].
Coalesce is a property of a field that we use in transform map field mapping. Coalescing on a field (or set of fields) lets you use the field as a unique key. If a match is found using the coalesce field, the existing record will be updated with the information being imported. If a match is not found, then a new record will be inserted into the database.
UI policies dynamically change information on a form and control custom process flows for tasks. UI policies are an alternative to client scripts. You can use UI policies to make fields mandatory, read-only, or visible on a form. You can also use a UI policy to dynamically change a field on a form.
With data policies, you can enforce data consistency by setting mandatory and read-only states for fields. Data policies are similar to UI policies, but UI policies only apply to data entered on a form through the standard browser. Data policies can apply rules to all data entered into the system, including data brought in through email, import sets or web services and data entered through the mobile UI.
A client script sits on the client side (the browser) and runs on the client side only. Following are the types of client script:
In order to cancel a form submission the onSubmit function should return false. Refer the below mentioned syntax:
function onSubmit() { return false; }
A business rule is a server-side script. It executes each time a record is inserted, updated, deleted, displayed or queried. The key thing to note while creating a business rule is when and on what action it has to execute: a business rule can be configured to run Before, After, Async, or on Display.
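As a rough illustration only, here is a minimal before-update business rule sketch; the table, condition and group name are assumptions, not part of the original answer:

(function executeRule(current, previous /*null when async*/) {
    // Hypothetical: when an Incident is raised to Critical priority,
    // route it to a specific assignment group.
    if (current.priority == 1) {
        current.assignment_group.setDisplayValue('Service Desk'); // assumed group name
    }
})(current, previous);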
Yes, you can trigger server-side logic from a client script by using GlideAjax (the server-side code typically lives in a client-callable Script Include).
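A minimal sketch of the GlideAjax pattern, assuming a client-callable Script Include named IncidentUtilsAjax with a getCallerEmail function (both names, and the u_caller_email field, are hypothetical):

// Client script
var ga = new GlideAjax('IncidentUtilsAjax');                  // Script Include name (assumed)
ga.addParam('sysparm_name', 'getCallerEmail');                // server-side function to call (assumed)
ga.addParam('sysparm_caller_id', g_form.getValue('caller_id'));
ga.getXMLAnswer(function(answer) {
    g_form.setValue('u_caller_email', answer);                // hypothetical custom field
});

// Script Include (client callable)
var IncidentUtilsAjax = Class.create();
IncidentUtilsAjax.prototype = Object.extendsObject(AbstractAjaxProcessor, {
    getCallerEmail: function() {
        var user = new GlideRecord('sys_user');
        if (user.get(this.getParameter('sysparm_caller_id')))
            return user.email.toString();
        return '';
    },
    type: 'IncidentUtilsAjax'
});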
The Task table is the parent table of Incident, Problem & Change. It makes sure any fields, or configurations defined on the parent table automatically apply to the child tables.
A catalog item that allows users to create task-based records from the Service Catalog is called as a record producer. For example, creating a change record or a problem record using record producer. Record producers provide an alternative way to create records through Service Catalog
GlideRecord is a Java class exposed to server-side scripts. It is used for performing database operations instead of writing SQL queries.
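For example, a simple server-side query sketch (the table and filter are illustrative, not from the original answer):

var gr = new GlideRecord('incident');      // table to query
gr.addQuery('active', true);               // filter: active incidents only
gr.query();                                // runs the query, no SQL required
while (gr.next()) {
    gs.info(gr.number + ' - ' + gr.short_description);
}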
An import set is a tool that imports data from various data sources and, then maps that data into ServiceNow tables using transform map. It acts as a staging table for records imported.
A transform map transforms the record imported into ServiceNow import set table to the target table. It also determines the relationships between fields displaying in an Import Set table and fields in a target table.
A foreign record insert occurs when an import makes a change to a table that is not the target table for that import. This happens, for example, when updating a reference field on a table.
Zing is the text indexing and search engine that performs all text searches in ServiceNow.
It is used to enhance the system logs. It provides more information on the duration of transactions between the client and the server.
It triggers an event for a task record if the task is inactive for a certain period of time. If the task remains inactive, the monitor repeats at regular intervals.
Domain separation is a way to separate data into logically-defined domains. For example a client ABC has two businesses and they are using ServiceNow single instance. They do not want users from one business to see data of other business. Here we can configure domain separation to isolate the records from both business.
You can set the property – “glide.ui.forgetme” to true to remove the ‘Remember me’ check box from login page.
The HTML Sanitizer is used to automatically clean up HTML markup in HTML fields and removes unwanted code and protect against security concerns such as cross-site scripting attacks. The HTML sanitizer is active for all instances starting with the Eureka release.
Check box is used to select whether the variables used should cascade, which passes their values to the ordered items. If this check box is cleared, variable information entered in the order guide is not passed on to ordered items.
A gauge is visible on a ServiceNow homepage and can contain up-to-the-minute information about current status of records that exists on ServiceNow tables. A gauge can be based on a report. It can be put on a homepage or a content page.
Metrics record and measure the workflow of individual records. With metrics, customers can arm their processes with tangible figures to measure, for example, how long it takes before a ticket is reassigned.
Following searches will help you find information in ServiceNow:
Lists: Find records in a list;
Global text search: Finds records in multiple tables from a single search field.
Knowledge base: Finds knowledge articles.
Navigation filter: Filters the items in the application navigator.
Search screens: Use a form like interface to search for records in a table. Administrators can create these custom modules.
A BSM map is a Business Service Management map. It graphically displays the Configuration Items (CIs) that support a business service and indicates the status of those Configuration Items.
Each update set is stored in the Update Set [sys_update_set] table. The customizations that are associated with the update set, are stored in [sys_update_xml] table.
If the Default update set is marked Complete, the system creates another update set named Default1 and uses it as the default update set.
Homepages and content pages don’t get added to ‘update sets’ by default. You need to manually add pages to the current ‘update sets’ by unloading them.
Reference qualifiers restrict the data that can be selected for a reference field.
Performance Analytics is an additional application in ServiceNow that allows customers to take a snapshot of data at regular intervals and create time series for any Key Performance Indicator (KPI) in the organization.
The latest user interface is the UI16 interface. It came with the Helsinki release.
It is a unique 32-character GUID that identifies each record created in each table in ServiceNow.
A scorecard measures the performance of an employee or a business process. It is a graphical representation of progress over time. A scorecard belongs to an indicator. The first step is to define the indicators that you want to measure. You can enhance scorecards by adding targets, breakdowns (scores per group), aggregates, and time series.
Yes, you can do it by using the autoSysFields() function in your server-side script. Set autoSysFields(false) on the GlideRecord before updating the record.
Consider the following example:
var gr = new GlideRecord('incident');
gr.query();
if (gr.next()) {
    gr.autoSysFields(false); // do not touch sys_updated_by, sys_updated_on, sys_mod_count, etc.
    gr.short_description = 'Test from Examsmyntra';
    gr.update();
}
Navigate to User Administration > Role and click New.
You can, but there is no guarantee of sequencing. You cannot predict what order your event handlers will run.
You can use the addActiveQuery() method to get all the active records and addInactiveQuery() to get all the inactive records.
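For instance, a small sketch using addActiveQuery() (addInactiveQuery() is used the same way for inactive records, per the statement above):

var gr = new GlideRecord('incident');
gr.addActiveQuery();                       // equivalent to filtering on active = true
gr.query();
gs.info('Active incidents: ' + gr.getRowCount());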
The next() method moves to the next record in a GlideRecord. _next() provides the same functionality as next() and is intended for cases when you query a table that has a column named 'next'.
So this brings us to the end of the blog. I hope you enjoyed these ServiceNow Interview Questions. The topics that you learnt in this ServiceNow Interview questions blog are the most sought-after skill sets that recruiters look for in a ServiceNow Professional.
You can also check out our ServiceNow YouTube playlist:
www.youtube.com/playlist?list=PL9ooVrP1hQOGOrWF7soRFiTVepwQI6Dfw
If you wish to build a career in ServiceNow, then check out our ServiceNow Certification Training.
Got a question for us? Please mention it in the comments section of this ServiceNow Interview Questions and we will get back to you.
Original article source at: https://www.edureka.co/
Top 50 Docker Interview Questions You Must Prepare
Introduced in 2013, Docker took the IT industry by storm. It turned out to be a big hit, with more than 13 billion container image downloads per month in 2022. The increasing demand for Docker led to an exponential increase in job openings. Go ahead and take advantage of all the new job openings with this article, which lists down the 50 most important Docker Interview Questions.
I have categorized these 50 questions into:
This category of Docker Interview Questions consists of questions that you’re expected to know. These are the most basic questions. An interviewer will start with these and eventually increase the difficulty level. Let’s have a look at them.
1. What is Hypervisor?
A hypervisor is software that makes virtualization possible. It is also called a Virtual Machine Monitor. It divides the host system and allocates the resources to each divided virtual environment. You can basically have multiple OSes on a single host system. There are two types of hypervisors:
Virtualization is the process of creating a software-based, virtual version of something (compute, storage, servers, applications, etc.). These virtual versions or environments are created from a single physical hardware system. Virtualization lets you split one system into many different sections which act like separate, distinct individual systems. A software layer called a hypervisor makes this kind of splitting possible. The virtual environment created by the hypervisor is called a virtual machine.
Let me explain this with an example. Usually, in the software development process, code developed on one machine might not work perfectly fine on any other machine because of the dependencies. This problem was solved by the containerization concept. So basically, an application that is being developed and deployed is bundled and wrapped together with all its configuration files and dependencies. This bundle is called a container. Now when you wish to run the application on another system, the container is deployed, which will give a bug-free environment as all the dependencies and libraries are wrapped together. The most famous containerization environments are Docker and Kubernetes.
4. Difference between virtualization and containerization
Once you’ve explained containerization and virtualization, the next expected question would be differences. The question could either be differences between virtualization and containerization or differences between virtual machines and containers. Either way, this is how you respond.
Containers provide an isolated environment for running the application. The entire user space is explicitly dedicated to the application. Any changes made inside the container are never reflected on the host or even on other containers running on the same host. Containers are an abstraction of the application layer. Each container is a different application.
Whereas in virtualization, hypervisors provide an entire virtual machine to the guest (including the kernel). Virtual machines are an abstraction of the hardware layer. Each VM behaves like a separate physical machine.
Since it's a Docker interview, there will be an obvious question about what Docker is. Start with a small definition.
Docker is a containerization platform which packages your application and all its dependencies together in the form of containers so as to ensure that your application works seamlessly in any environment, be it development, test or production. Docker containers, wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries, etc. It wraps basically anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
Docker containers include the application and all of its dependencies. It shares the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. Docker containers are basically runtime instances of Docker images.
When you mention Docker images, your very next question will be “what are Docker images”.
Docker image is the source of Docker container. In other words, Docker images are used to create containers. When a user runs a Docker image, an instance of a container is created. These docker images can be deployed to any Docker environment.
Docker images create Docker containers. There has to be a registry where these Docker images live. This registry is Docker Hub. Users can pick up images from Docker Hub and use them to create customized images and containers. Currently, Docker Hub is the world's largest public repository of container images.
Docker Architecture consists of a Docker Engine, which is a client-server application with three major components: a server (the long-running Docker daemon), a REST API that programs use to talk to the daemon, and a command-line interface (CLI) client.
Refer to this blog, to read more about Docker Architecture.
Let’s start by giving a small explanation of Dockerfile and proceed by giving examples and commands to support your arguments.
Docker can build images automatically by reading the instructions from a file called Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
The interviewer does not just expect definitions, hence explain how to use a Dockerfile which comes with experience. Have a look at this tutorial to understand how Dockerfile works.
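As an illustration only (not from the original article), here is a minimal Dockerfile sketch for a hypothetical Node.js app, followed by the commands to build and run it:

# Dockerfile - minimal sketch for a hypothetical Node.js app
FROM node:18-alpine           # base image
WORKDIR /app                  # working directory inside the image
COPY package*.json ./         # copy dependency manifests first for better layer caching
RUN npm install               # install dependencies
COPY . .                      # copy the application source
EXPOSE 3000                   # document the port the app listens on
CMD ["node", "server.js"]     # default command when a container starts

$ docker build -t my-node-app .
$ docker run -d -p 3000:3000 my-node-app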
Docker Compose is a YAML file which contains details about the services, networks, and volumes for setting up the Docker application. So, you can use Docker Compose to create separate containers, host them and get them to communicate with each other. Each container will expose a port for communicating with other containers.
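A minimal docker-compose.yml sketch (the service names, images and ports are assumptions for illustration):

# docker-compose.yml
version: "3"
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8000:8000"     # host:container port mapping
  redis:
    image: redis:alpine # ready-made service pulled from Docker Hub

Running docker-compose up -d would bring both services up on a shared network so they can talk to each other by service name.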
You are expected to have worked with Docker Swarm as it’s an important concept of Docker.
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
A namespace is one of the Linux kernel features and an important concept behind containers. Namespaces add a layer of isolation to containers. Docker provides various namespaces in order to stay portable and not affect the underlying host system. A few namespace types supported by Docker are PID, Mount, IPC, User and Network.
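As a rough illustration of namespace isolation, docker run lets you deliberately share a host namespace instead of isolating it (standard flags; the alpine image and commands are just examples):

$ docker run --rm --network=host alpine ip addr   # container shares the host's network namespace
$ docker run --rm --pid=host alpine ps            # container sees the host's process tree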
This is one of the most popular questions asked in Docker interviews. Docker containers have the following lifecycle:
Docker machine is a tool that lets you install Docker Engine on virtual hosts. These hosts can now be managed using the docker-machine commands. Docker machine also lets you provision Docker Swarm Clusters.
Once you’ve aced the basic conceptual questions, the interviewer will increase the difficulty level. So let’s move on to the next section of this Docker Interview Questions article. This section talks about the commands that are very common amongst docker users.
The following command gives you information about Docker Client and Server versions:
$ docker version
You can use the following command to get detailed information about the docker installed on your system.
$ docker info
You can get the number of containers running, paused, stopped, the number of images and a lot more.
The following command is very useful as it gives you help on how to use a command, the syntax, etc.
$ docker --help
The above command lists all Docker commands. If you need help with one specific command, you can use the following syntax:
$ docker <command> --help
You can use the following command to login into hub.docker.com:
$ docker login
You’ll be prompted for your username and password, insert those and congratulations, you’re logged in.
You pull an image from docker hub onto your local system
It’s one simple command to pull an image from docker hub:
$ docker pull <image_name>
Pull an image from docker repository with the above command and run it to create a container. Use the following command:
$ docker run -it -d <image_name>
Most probably the next question would be, what does the ‘-d’ flag mean in the command?
-d means the container needs to start in the detached mode. Explain a little about the detach mode. Have a look at this blog to get a better understanding of different docker commands.
The following command lists down all the running containers:
$ docker ps
The following command lets us access a running container:
$ docker exec -it <container id> bash
The exec command lets you get inside a container and work with it.
24. How to start, stop and kill a container?
The following command is used to start a docker container:
$ docker start <container_id>
and the following for stopping a running container:
$ docker stop <container_id>
kill a container with the following command:
$ docker kill <container_id>
Of course, you can use a container, edit it and update it. This sounds complicated but it's actually just a couple of commands.
$ docker commit <container_id> <username/imagename>
$ docker push <username/image name>
Use the following command to delete a stopped container:
$ docker rm <container id>
The following command lets you delete an image from the local system:
$ docker rmi <image-id>
Once you’ve written a Dockerfile, you need to build it to create an image with those specifications. Use the following command to build a Dockerfile:
$ docker build <path to docker file>
The next question would be: when do you use "." and when do you use the entire path?
Use "." when the Dockerfile exists in the current working directory (the build context), and use the entire path if it lives somewhere else.
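For instance (the image tag myimage is just a placeholder):

$ docker build -t myimage .                       # Dockerfile in the current directory
$ docker build -t myimage /path/to/project        # build context (and Dockerfile) in another directory
$ docker build -t myimage -f docker/Dockerfile .  # Dockerfile with a different name or location, via -f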
$ docker system prune
The above command is used to remove all the stopped containers, all the networks that are not used, all dangling images and all build caches. It’s one of the most useful docker commands.
Once the interviewer knows that you're familiar with the Docker commands, he/she will start asking about practical applications. This section of Docker Interview Questions consists of questions that you'll only be able to answer when you've gained some experience working with Docker.
No, you won’t lose any data when Docker container exits. Any data that your application writes to the container gets preserved on the disk until you explicitly delete the container. The file system for the container persists even after the container halts.
When asked such a question, respond by talking about applications of Docker. Docker is being used in the following areas:
Docker containers are very easy to deploy in any cloud platform. It can get more applications running on the same hardware when compared to other technologies, it makes it easy for developers to quickly create, ready-to-run containerized applications and it makes managing and deploying applications much easier. You can even share containers with your applications.
If you have some more points to add you can do that but make sure the above explanation is there in your answer.
You can use JSON instead of YAML for your compose file, to use JSON file with compose, specify the JSON filename to use, for eg:
$ docker-compose -f docker-compose.json up
Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used it with other tools like Puppet, Chef or Jenkins. If you have no past practical experience in Docker and instead have experience with other tools in a similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality.
Large web deployments like Google and Twitter and platform providers such as Heroku and dotCloud all run on container technology. Containers can be scaled to hundreds of thousands or even millions running in parallel. Talking about requirements, containers need memory and the OS at all times, and a way to use this memory efficiently when scaled.
This is a very straightforward question but can get tricky. Do some company research before going for the interview and find out how the company is using Docker. Make sure you mention the platform company is using in this answer.
Docker runs on various Linux distributions:
It can also be used in production with Cloud platforms with the following services:
There are six possible states a container can be at any given point – Created, Running, Paused, Restarting, Exited, Dead.
Use the following command to check for docker state at any given point:
$ docker ps
The above command lists down only running containers by default. To look for all containers, use the following command:
$ docker ps -a
The answer is no. You cannot remove a paused container. The container has to be in the stopped state before it can be removed.
No, it's not possible for a container to restart by itself. By default, the --restart flag is not set (the restart policy defaults to 'no').
It's always better to stop the container and then remove it using the remove command.
$ docker stop <container_id>
$ docker rm -f <container_id>
Stopping the container first sends a SIGTERM signal to the processes inside it, giving the container enough time to clean up its tasks before it is removed. This method is considered good practice and avoids unwanted errors.
Docker containers are gaining popularity, but at the same time, cloud services are putting up a good fight. In my personal opinion, Docker will never be replaced by the cloud. Using cloud services together with containerization will definitely up the game. Organizations need to take their requirements and dependencies into consideration and decide what's best for them. Most companies have integrated Docker with the cloud. This way they can make the best of both technologies.
There can be as many containers as you wish per host. Docker does not put any restrictions on it. But you need to consider that every container needs storage space, CPU and memory, which the hardware needs to support. You also need to consider the application size. Containers are considered to be lightweight but very dependent on the host OS.
The concept behind stateful applications is that they store their data onto the local file system. If you decide to move the application to another machine, retrieving the data becomes painful. I honestly would not prefer running stateful applications on Docker.
The answer is yes. Docker compose always runs in the dependency order. These dependencies are specifications like depends_on, links, volumes_from, etc.
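A tiny compose sketch of such a dependency (service names and images are assumptions):

services:
  web:
    build: .
    depends_on:
      - db          # compose starts db before web
  db:
    image: postgres:14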
Docker provides functionalities like docker stats and docker events to monitor docker in production. Docker stats provides CPU and memory usage of the container. Docker events provide information about the activities taking place in the docker daemon.
Yes, using docker compose in production is the best practical application of docker compose. When you define applications with compose, you can use this compose definition in various production stages like CI, staging, testing, etc.
These are the changes you need to make to your compose file before migrating your application to the production environment:
Be very honest in such questions. If you have used Kubernetes, talk about your experience with Kubernetes and Docker Swarm. Point out the key areas where you thought docker swarm was more efficient and vice versa. Have a look at this blog for understanding differences between Docker and Kubernetes.
Your Docker interview questions are not just limited to the workings of Docker but also other similar tools. Hence be prepared with tools/technologies that give Docker competition. One such example is Kubernetes.
While using a Docker service with multiple containers across different hosts, you come across the need to load balance the incoming traffic. Load balancing and HAProxy are basically used to balance the incoming traffic across the different available (healthy) containers. If one container crashes, another container should automatically start running and the traffic should be re-routed to this new running container. Load balancing and HAProxy work around this concept.
This brings us to the end of the Docker Interview Questions article. With increasing business competition, companies have realized the importance of adapting and taking advantage of the changing market. Few things that kept them in the game were faster scaling of systems, better software delivery, adapting to new technologies, etc. That’s when docker swung into the picture and gave these companies boosting support to continue the race.
If you want to learn more about DevOps, check out the DevOps training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka DevOps Certification Training course helps learners gain expertise in various DevOps processes and tools such as Puppet, Jenkins, Nagios and GIT for automating multiple steps in SDLC.
Original article source at: https://www.edureka.co/
In this tutorial, we cover the most frequently asked Linux interview questions and their answers.
Did you know that more than 90% of the World’s Fastest Computers use Linux? No doubt why! Linux is fast, powerful, and a techies’ favorite. If you are looking to become a Linux Administrator, then this is the right place for you to prepare for the interview. In this article, I will be discussing some of the most common and important Linux Interview Questions and their Answers.
Preparing for your interview with this list of questions and answers will astonish your interviewer and help you get your Linux job.
The following are the questions that are answered in the video:
⏰Timestamps⏰
00:00 Intro
00:28 #1 What do you mean by Linux?
00:58 #2 What are the basic components of Linux?
02:36 #3 What do you mean by LILO?
03:24 #4 What do you mean by Linux shell, and what types of shells are there?
04:22 #5 What do you know and mean by Daemons?
05:05 #6 What are some of the differences between Cron and Anacron?
06:23 #7 What do you mean by Load Average in Linux?
06:53 #8 What are CLI and GUI?
07:51 #9 What do you mean by SSH? How can we use it?
08:29 #10 What do you know about file permissions in Linux?
09:09 #11 Name some of the Linux directory commands.
09:56 #12 What is the difference between rmdir and rm -r?
10:38 #13 What do you mean by Desktop Environment?
11:16 #14 How can you copy a file in Linux?
12:16 #15 What is the difference between the cp command and the mv command?
12:58 #16 What should you do when a program fails to execute or how do you kill a process or program in Linux?
14:08 #17 How can you create a file without opening it, or how can you create a file from the terminal?
15:00 #18 Why do we use the Export command?
15:26 #19 What is the Top command, and why do we use it?
16:17 #20 What do you mean by Shell Script?
16:45 Outro
This Linux Interview Questions blog is divided into two parts: Part A-Theoretical Questions and Part B-Scenario Based Questions. Let’s get started!
In this part of Linux Interview Questions, we will discuss the most common theoretical and concept based questions.
Linux is an open-source operating system based on Unix. Linux was first introduced by Linus Torvalds. The main purpose of Linux was to provide a free and low-cost operating system for users who could not afford operating systems like Windows, macOS or Unix.
The main differences between Linux and UNIX are as follows:
| Parameter | Linux | Unix |
| --- | --- | --- |
| Price | Both free distributions and paid distributions are available. | Different levels of UNIX have a different cost structure. |
| Target User | Everyone (home user, developer, etc.) | Mainly Internet servers, workstations, mainframes. |
| File System Support | Ext2, Ext3, Ext4, Jfs, ReiserFS, Xfs, Btrfs, FAT, FAT32, NTFS | jfs, gpfs, hfs, hfs+, ufs, xfs, zfs, vxfs |
| GUI | KDE and Gnome | Common Desktop Environment |
| Viruses listed | 60-100 | 80-120 |
| Bug Fix Speed | Faster, because Linux is community driven | Slow |
| Portability | Yes | No |
| Examples | Ubuntu, Fedora, Red Hat, Kali Linux, Debian, Arch Linux, Android, etc. | OS X, Solaris, All Linux |
Linux vs. Unix – Linux Interview Questions
Linux kernel refers to the low-level system software. It is used to manage resources and provide an interface for user interaction.
Yes, it is legal to edit the Linux kernel. Linux is released under the GNU General Public License (GPL). Any project released under the GPL can be modified and edited by end users.
LILO stands for LInux LOader. LILO is a Linux Boot Loader that loads Linux Operating System into the main memory to begin execution. Most of the computers come with boot loaders for certain versions of Windows or Mac OS. So, when you want to use Linux OS, you need to install a special boot loader for it. LILO is one such boot loader.
When the computer is started, BIOS conducts some initial tests and transfers control to the Master Boot Record. From here, LILO loads the Linux OS and starts it.
The advantage of using LILO is that it allows fast boot of Linux OS.
The basic components of Linux are:
The most common Shells used in Linux are
Swap space is the additional space used by Linux to temporarily hold concurrently running programs when the RAM does not have enough space to hold them. When you run a program, it resides in the RAM so that the processor can fetch data quickly. Suppose you are running more programs than the RAM can hold; then these running programs are stored in the swap space. The processor will now look for data in the RAM and the swap space.
Swap Space is used as an extension of RAM by Linux.
There are 3 main differences between BASH and DOS:
| Sl. no. | BASH | DOS |
| --- | --- | --- |
| 1. | Commands are case-sensitive. | Commands are not case-sensitive. |
| 2. | '/' (forward slash) is used as a directory separator and '\' (backslash) is used as an escape character. | '/' (forward slash) is used as a command argument delimiter and '\' (backslash) is used as a directory separator. |
| 3. | No naming convention for files. | Follows the 8.3 naming convention: 8 characters for the file name followed by 3 characters for the extension. |
Bash vs Dos – Linux Interview Questions
You can use any of the following commands:
free -m
vmstat
top
htop
There are 3 kinds of permission in Linux: read, write and execute.
You can change the permission of a file or a directory using the chmod command. There are two modes of using the chmod command:
The general syntax to change permission using Symbolic mode is as follows:
$ chmod <target>(+/-/=)<permission> <filename>
where <permission> can be r: read, w: write, x: execute;
<target> can be u: user, g: group, o: other, a: all;
'+' is used for adding a permission, '-' is used for removing a permission, and '=' is used for setting the permission exactly.
For example, if you want to set the permission such that the user can read, write, and execute it and members of your group can read and execute it, and others may only read it.
Then the command for this will be:
$ chmod u=rwx,g=rx,o=r filename
The general syntax to change permission using Absolute mode is as follows:
$ chmod <permission> filename
The Absolute mode follows octal representation. The leftmost digit is for the user, the middle digit is for the user group and the rightmost digit is for all.
Below is the table that explains the meaning of the digits that can be used and their effect.
| Digit | Permission | Symbol |
| --- | --- | --- |
| 0 | No permission | --- |
| 1 | Execute permission | --x |
| 2 | Write permission | -w- |
| 3 | Execute and write permission: 1 (execute) + 2 (write) = 3 | -wx |
| 4 | Read permission | r-- |
| 5 | Read and execute permission: 4 (read) + 1 (execute) = 5 | r-x |
| 6 | Read and write permission: 4 (read) + 2 (write) = 6 | rw- |
| 7 | All permissions: 4 (read) + 2 (write) + 1 (execute) = 7 | rwx |
For example, if you want to set the permission such that the user can read, write, and execute it and members of your group can read and execute it, and others may only read it.
Then the command for this will be:
$ chmod 754 filename
An inode is the unique identifier given by the operating system to each file. Similarly, a process id is the unique id given to each process.
There are 5 main Directory Commands in Linux:
pwd: Displays the path of the present working directory.
Syntax: $ pwd
ls: Lists all the files and directories in the present working directory.
Syntax: $ ls
cd: Used to change the present working directory.
Syntax: $ cd <path to new directory>
mkdir: Creates a new directory
Syntax: $ mkdir <name (and path if required) of new directory>
rmdir: Deletes a directory
Syntax: $ rmdir <name (and path if required) of directory>
Virtual Desktop is a feature that allows users to use the desktop beyond the physical limits of the screen. Basically, Virtual Desktop creates a virtual screen to expand the limitation of the normal screen.
There are two ways Virtual Desktop can be implemented:
In the case of Switching Desktops, you can create discrete virtual desktops to run programs. Here, each virtual desktop will behave as an individual desktop and the programs running on each of these desktops is accessible only to the users who are using that particular desktop.
Oversized Desktops do not offer a discrete virtual desktop but it allows the user to pan and scroll around the desktop that is larger in size than the physical screen.
There are 3 modes of the vi editor: Command mode, Insert mode, and Ex (last-line) mode.
A daemon is a computer program that runs as a background process to provide functions that might not be available in the base operating system. Daemons are usually used to run services in the background without being directly controlled by interactive users. The purpose of daemons is to handle periodic requests and then forward them to the appropriate programs for execution.
The process states are as follows:
Grep stands for Global Regular Expression Print. The grep command is used to search for a text in a file by pattern matching based on regular expression.
Syntax: grep [options] pattern [files]
Example:
$ grep -c "linux" interview.txt
This command will print the count of the word “linux” in the “interview.txt” file.
The System Calls to manage the process are:
And the System Calls used to get Process ID are:
The ls command is used to list the files in a specified directory. The general syntax is:
$ ls <options> <directory>
For example, if you want to list all the files in the Example directory, then the command will be as follows:
$ ls Example/
There are different options that can be used with the ls command. These options give additional information about the file/ folder. For example:
| Option | Description |
| --- | --- |
| -l | lists in long format (shows the permissions of the file) |
| -a | lists all files including hidden files |
| -i | lists files with their inode number |
| -s | lists files with their size |
| -S | lists files with their size and sorts the list by file size |
| -t | sorts the listed files by time and date |
The redirection operator is used to redirect the output of a particular command as an input to another command or file.
There are two ways of using this:
‘>’ overwrites the existing content of the file or creates a new file.
‘>>’ appends the new content to the end of the file or creates a new file.
Suppose the file already contains some text. When you use the '>' redirection operator, the existing contents of the file are overwritten, and when you use '>>', the new content is appended to the end of the file.
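An illustrative sequence, using a hypothetical file notes.txt:

$ echo "first line" > notes.txt     # '>' creates the file or overwrites its contents
$ echo "second line" > notes.txt    # notes.txt now contains only "second line"
$ echo "third line" >> notes.txt    # '>>' appends, so notes.txt now has two lines
$ cat notes.txt
second line
third line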
The tar command is used to extract or create an archived file.
Suppose you want to extract all the files from the archive named sample.tar.gz, then the command will be:
$ tar -xvzf sample.tar.gz
Suppose you want to create an archive of all the files stored in the path /home/linux/, then the command will be:
$ tar -cvzf filename.tar.gz /home/linux/
where c: create archive, x: extract, v: verbose, f: file
A Latch is a temporary storage device controlled by timing signal which can either store 0 or 1. A Latch has two stable states (high-output or 1, and low-output or 0) and is mainly used to store state information. A Latch can store one bit of data as long as it is powered on.
A Microprocessor is a device that executes instructions. It is a single-chip device that fetches the instruction from the memory, decodes it and executes it. A Microprocessor can carry out 3 basic functions:
Regular Expressions are used to search for data having a particular pattern. Some of the commands used with Regular Patterns are: tr, sed, vi and grep.
Some of the common symbols used in Regular Expressions are:
| Symbol | Meaning |
| --- | --- |
| . | Match any character |
| ^ | Match the beginning of the string |
| $ | Match the end of the string |
| * | Match zero or more occurrences of the preceding character |
| \ | Escape character; represents special characters |
| ? | Match exactly one character |
Suppose the content of a file is as follows:
If you want to list the entries that start with the character ‘a’, then the command would be:
$ cat linux.txt | grep ^a
If you want to list the entries that contain the character 'n', then the command would be:
$ cat linux.txt | grep n
The minimum number of partitions required is 2.
One partition is used as the local file system where all the files are stored. This includes files of the OS, files of applications and services, and files of the user. And the other partition is used as Swap Space which acts as an extended memory for RAM.
Interviewers will ask scenario based questions along with theoretical questions to check how much hands-on knowledge you have. In this part of Linux Interview Questions, we will discuss such questions.
You can use the cp command to copy a file in Linux. The general syntax is:
$ cp <source> <destination>
Suppose you want to copy a file named questions.txt from the directory /new/linux to /linux/interview, then the command will be:
$ cp /new/linux/questions.txt /linux/interview/
Every process has a unique process id. To terminate the process, we first need to find the process id. The ps command will list all the running processes along with their process ids. Then we use the kill command to terminate the process.
The command for listing down all the processes:
$ ps
Suppose the process id of the process you want to terminate is 3849, then you will have to terminate it like this:
$ kill 3849
There is no specific rename command in Linux, but you can use the move command to rename a file, or copy it to a new name and delete the original.
Using the Move command
$ mv <oldname> <newname>
Using the Copy command
$ cp <oldname> <newname>
And then delete the old file.
$ rm <oldname>
You can use the redirection operator (>) to do this.
Syntax: $ (command) > (filename)
By running the following command:
$ mount -l
You can use the locate command to find the path to the file.
Suppose you want to find the locations of a file name sample.txt, then your command would be:
$ locate sample.txt
You can use the diff command for this:
$ diff abc.conf xyz.conf
for i in *linux*; do rm "$i"; done
The touch command can be used to create a text file without opening it. The touch command will create an empty file. The syntax is as follows:
$ touch <filename>
Suppose you want to create a file named sample.txt, then the command would be:
$ touch sample.txt
There are two commands that can be used to delete a directory in Linux.
$ rmdir <directory name>
$ rm -rf <directory name>
Note: The command rm -rf should be used carefully because it will delete all the data without any warnings.
There are two commands to schedule tasks in Linux: cron and at.
The cron command is used to repeatedly schedule a task at a specific time. The tasks are stored in a cron file and then executed using the cron command. The cron command reads the string from this file and schedules the task. The syntax for the string to enter in the cron file is as follows:
<minute> <hour> <day> <month> <weekday> <command>
Suppose you want to run a command at 4 pm every Sunday, then the string would be:
0 16 * * 0 <command>
The at command is used to schedule a task only once at the specified time.
Suppose you want to shut down the system at 6 pm today, then the command for this would be:
$ echo "shutdown now" | at -m 18:00
The .z extension means that the file has been compressed. To look at the contents of the compressed file, you can use the zcat command. Example:
$ zcat sample.z
Follow these steps to copy files to a Floppy Disk safely:
If you don’t unmount the floppy disk, then the data might become corrupted.
Open the terminal and run:
$ echo $SHELL
This will print the name of the Shell being used.
SSH can be used for this. The Syntax is as follows:
ssh <username>@<ip address>
Suppose you want to login into a system with IP address 192.168.5.5 as a user “mike”, then the command would be:
$ ssh mike@192.168.5.5
$ vim -R <filename>
$ vim +/<employee id to be searched> <filename>
$ vim +<line number> <filename>
This can be done using the sort command.
$ sort sample.txt
The export command is used to set and reload the environment variables. For example, if you want to set the Java path, then the command would be:
$ export JAVA_HOME=/home/user/Java/bin
$ service <servicename> status
$ service --status-all
To start:
$ service <servicename> start
To stop:
$ service <servicename> stop
This command is used to display the free, used, swap memory available in the system.
Typical free command output. By default, the output is displayed in kibibytes; use the -m or -h option for more readable units.
$ free
I hope these Linux Interview Questions will help you perform well in your interview. And I wish you all the best!
Original article source at https://www.edureka.co
In this article, we will see how HashMap's get and put methods work internally: what operations are performed, how the hashing is done, how a value is fetched by key, and how a key-value pair is stored.
Now we will see how this works. First we will see the hashing process.
Hashing
Hashing is a process of converting an object into an integer form by using the method hashCode(). It's necessary to write the hashCode() method properly for better performance of HashMap. Here I am using a key of my own class so that I can override the hashCode() method to show different scenarios. My Key class is
//custom Key class to override hashCode()
// and equals() method
class Key
{
String key;
Key(String key)
{
this.key = key;
}
@Override
public int hashCode()
{
return (int)key.charAt(0);
}
@Override
public boolean equals(Object obj)
{
    // compare the underlying key strings (assumes obj is another Key)
    return (obj instanceof Key) && key.equals(((Key)obj).key);
}
}
Here the overridden hashCode() method returns the first character's ASCII value as the hash code. So whenever the first character of a key is the same, the hash code will be the same. You should not use this approach in your programs; it is just for demo purposes. As HashMap also allows a null key, the hash code of null will always be 0.
hashCode() method
The hashCode() method is used to get the hash code of an object. The hashCode() method of the Object class returns the memory reference of the object in integer form. Its definition is public native int hashCode(). It indicates that the implementation of hashCode() is native, because there is no direct method in Java to fetch the reference of an object. It is possible to provide your own implementation of hashCode().
In HashMap, hashCode() is used to calculate the bucket and therefore calculate the index.
equals() method
equals method is used to check that 2 objects are equal or not. This method is provided by Object class. You can override this in your class to provide your own implementation.
HashMap uses equals() to compare the key whether they are equal or not. If equals() method return true, they are equal otherwise not equal.
Buckets
A bucket is one element of the HashMap array. It is used to store nodes. Two or more nodes can hash to the same bucket; in that case, a linked list structure is used to connect the nodes. The number of buckets is the capacity of the map, and together with the load factor it determines when the map is resized:
threshold = capacity (number of buckets) * load factor
A single bucket can have more than one nodes, it depends on hashCode() method. The better your hashCode() method is, the better your buckets will be utilized.
Index Calculation in Hashmap
The hash code of a key may be too large to use directly as an array index. The generated hash code may lie anywhere in the integer range, and creating an array for such a range would easily cause an OutOfMemoryError. So we generate an index to minimize the size of the array. Basically, the following operation is performed to calculate the index:
index = hashCode(key) & (n-1).
where n is number of buckets or the size of array. In our example, I will consider n as default size that is 16.
Why the above method is used to calculate the index
Using a bitwise AND operator is similar to doing bit masking, wherein only the lower bits of the hash integer are considered. This provides a very efficient method of calculating the modulus based on the length of the HashMap.
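For example, with the default capacity n = 16 and a hash of 118 (the value used in the snippet below): 118 & (16 - 1) = 0b1110110 & 0b0001111 = 0b0000110 = 6, which is the same result as 118 % 16 but computed with a single AND operation.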
HashMap<Key, Integer> map = new HashMap<Key, Integer>();
map.put(new Key("vishal"), 20);
{
int hash = 118
// {"vishal"} is not a string but
// an object of class Key
Key key = {"vishal"}
Integer value = 20
Node next = null
}
Place this object at index 6, if no other object is presented there.
map.put(new Key("sachin"), 30);
{
int hash = 115
Key key = {"sachin"}
Integer value = 30
Node next = null
}
map.put(new Key("vaibhav"), 40);
{
int hash = 118
Key key = {"vaibhav"}
Integer value = 40
Node next = null
}
Using the get() method
Now let's try the get() method to fetch a value. The get(K key) method is used to get a value by its key. If you don't know the key, then it is not possible to fetch the value.
map.get(new Key("sachin"));
map.get(new Key("vaibhav"));
It is the process of converting an object into an integer value. The integer value helps in indexing and faster searches.
HashMap is a part of the Java collection framework. It uses a technique called hashing. It implements the Map interface. It stores the data in Key and Value pairs. HashMap contains an array of nodes, and a node is represented as a class. It uses an array and a LinkedList data structure internally for storing Keys and Values. There are four fields in a node of HashMap: int hash, K key, V value, and Node next (as shown in the node snapshots above).
Before understanding the internal working of HashMap, you must be aware of hashCode() and equals() method.
We use put() method to insert the Key and Value pair in the HashMap. The default size of HashMap is 16 (0 to 15).
In the following example, we want to insert three (Key, Value) pair in the HashMap.
HashMap<String, Integer> map = new HashMap<>();
map.put("Aman", 19);
map.put("Sunny", 29);
map.put("Ritesh", 39);
Let's see at which index the Key, value pair will be saved into HashMap. When we call the put() method, then it calculates the hash code of the Key "Aman." Suppose the hash code of "Aman" is 2657860. To store the Key in memory, we have to calculate the index.
Index minimizes the size of the array. The Formula for calculating the index is:
Index = hashcode(Key) & (n-1)
Where n is the size of the array. Hence the index value for "Aman" is:
Index = 2657860 & (16-1) = 4
The value 4 is the computed index value where the Key and value will store in HashMap.
This is the case when the calculated index value is the same for two or more Keys. Let's calculate the hash code for another Key "Sunny." Suppose the hash code for "Sunny" is 63281940. To store the Key in the memory, we have to calculate index by using the index formula.
Index=63281940 & (16-1) = 4
The value 4 is the computed index value where the Key will be stored in HashMap. In this case, equals() method check that both Keys are equal or not. If Keys are same, replace the value with the current value. Otherwise, connect this node object to the existing node object through the LinkedList. Hence both Keys will be stored at index 4.
Similarly, we will store the Key "Ritesh." Suppose hash code for the Key is 2349873. The index value will be 1. Hence this Key will be stored at index 1.
get() method is used to get the value by its Key. It will not fetch the value if you don't know the Key. When get(K Key) method is called, it calculates the hash code of the Key.
Suppose we have to fetch the Key "Aman." The following method will be called.
map.get(new Key("Aman"));
It generates the hash code as 2657860. Now calculate the index value of 2657860 by using index formula. The index value will be 4, as we have calculated above. get() method search for the index value 4. It compares the first element Key with the given Key. If both keys are equal, then it returns the value else check for the next element in the node if it exists. In our scenario, it is found as the first element of the node and return the value 19.
Let's fetch another Key "Sunny."
The hash code of the Key "Sunny" is 63281940. The calculated index value of 63281940 is 4, as we have calculated for put() method. Go to index 4 of the array and compare the first element's Key with the given Key. It also compares Keys. In our scenario, the given Key is the second element, and the next of the node is null. It compares the second element Key with the specified Key and returns the value 29. It returns null if the next of the node is null.
123-JavaScript-Interview-Questions
This book's goal is to help javascript frontend developers prepare for technical job interviews through a collection of carefully compiled questions.
What is the difference between undefined and not defined in JavaScript?
Answer
In JavaScript, if you try to use a variable that doesn't exist and has not been declared, then JavaScript will throw an error "var name is not defined" and the script will stop executing thereafter. But if you use typeof undeclared_variable, then it will return undefined.
Before starting further discussion let's understand the difference between declaration and definition.
var x is a declaration, because we are not defining what value it holds yet, but we are declaring its existence and the need for memory allocation.
var x; // declaring x
console.log(x); // output: undefined
var x = 1 is both declaration and definition: here declaration and assignment of value happen inline for variable x. What we are doing is called "initialisation". In JavaScript, both variable declarations and function declarations go to the top of the scope in which they are declared, and then assignment happens; this series of events is called "hoisting".
A variable can be declared but not defined. When we try to access it, the result will be undefined.
var x; // Declaration
typeof x === 'undefined'; // Will return true
A variable can be neither declared nor defined. When we try to reference such a variable, the result will be not defined.
console.log(y); // Output: ReferenceError: y is not defined
http://stackoverflow.com/questions/20822022/javascript-variable-definition-declaration
For which value of x are the results of the following statements not the same?
if( x <= 100 ) {...}
if( !(x > 100) ) {...}
Answer
NaN <= 100 is false and NaN > 100 is also false, so if the value of x is NaN, the statements are not the same.
The same holds true for any value of x that, when converted to type Number, returns NaN, e.g.: undefined, [1,2,5], {a:22}, etc.
This is why you need to pay attention when you deal with numeric variables. NaN can't be equal to, less than or more than any other numeric value, so the only reliable way to check if the value is NaN is to use the isNaN() function.
What is the drawback of declaring methods directly in JavaScript objects?
Answer
One of the drawbacks of declaring methods directly in JavaScript objects is that they are very memory inefficient. When you do that, a new copy of the method is created for each instance of an object. Here's an example:
var Employee = function (name, company, salary) {
this.name = name || "";
this.company = company || "";
this.salary = salary || 5000;
// We can create a method like this:
this.formatSalary = function () {
return "$ " + this.salary;
};
};
// Alternatively we can add the method to Employee's prototype:
Employee.prototype.formatSalary2 = function() {
return "$ " + this.salary;
}
//creating objects
var emp1 = new Employee('Yuri Garagin', 'Company 1', 1000000);
var emp2 = new Employee('Dinesh Gupta', 'Company 2', 1039999);
var emp3 = new Employee('Erich Fromm', 'Company 3', 1299483);
In this case each instance variable emp1, emp2, emp3 has its own copy of the formatSalary method. However, formatSalary2 will only be added once to Employee.prototype.
What is a closure in JavaScript? Provide an example.
Answer
A closure is a function defined inside another function (called parent function) and as such it has access to the variables declared and defined within its parent function's scope.
The closure has access to the variables in three scopes: variables declared in its own scope, variables declared in the parent function's scope, and variables declared in the global namespace.
var globalVar = "abc"; //Global variable
// Parent self-invoking function
(function outerFunction (outerArg) { // start of outerFunction's scope
var outerFuncVar = 'x'; // Variable declared in outerFunction's function scope
// Closure self-invoking function
(function innerFunction (innerArg) { // start of innerFunction's scope
var innerFuncVar = "y"; // variable declared in innerFunction's function scope
console.log(
"outerArg = " + outerArg + "\n" +
"outerFuncVar = " + outerFuncVar + "\n" +
"innerArg = " + innerArg + "\n" +
"innerFuncVar = " + innerFuncVar + "\n" +
"globalVar = " + globalVar);
// end of innerFunction's scope
})(5); // Pass 5 as parameter to our Closure
// end of outerFunction's scope
})(7); // Pass 7 as parameter to the Parent function
innerFunction
is a closure which is defined inside outerFunction
and consequently has access to all the variables which have been declared and defined within outerFunction
's scope as well as any variables residing in the program's global scope.
The output of the code above would be:
outerArg = 7
outerFuncVar = x
innerArg = 5
innerFuncVar = y
globalVar = abc
console.log(mul(2)(3)(4)); // output : 24
console.log(mul(4)(3)(4)); // output : 48
Answer
function mul (x) {
return function (y) { // anonymous function
return function (z) { // anonymous function
return x * y * z;
};
};
}
Here the mul
function accepts the first argument and returns an anonymous function which then takes the second parameter and returns one last anonymous function which finally takes the third and final parameter; the last function then multiplies x
, y
and z
, and returns the result of the operation.
In JavaScript, a function defined inside another function has access to the outer function's scope and can consequently return, interact with, or pass on to other functions the variables belonging to the scopes that enclose it.
For instance:
var arrayList = ['a', 'b', 'c', 'd', 'e', 'f'];
How can we empty the array above?
Answer
There are a couple of ways to empty an array, so let's discuss each of them.
arrayList = [];
The code above sets the variable arrayList to a new empty array. This is recommended only if you don't have references to the original array anywhere else, because it actually creates a new, empty array rather than clearing the existing one. If you have referenced the array from another variable, that other reference will still point to the original, unchanged array. Only use this approach if the array is referenced solely through its original variable arrayList.
For instance:
var arrayList = ['a', 'b', 'c', 'd', 'e', 'f']; // Created array
var anotherArrayList = arrayList; // Referenced arrayList by another variable
arrayList = []; // Empty the array
console.log(anotherArrayList); // Output ['a', 'b', 'c', 'd', 'e', 'f']
arrayList.length = 0;
The code above will clear the existing array by setting its length to 0. This way of emptying an array will also update all the reference variables that point to the original array.
For instance:
var arrayList = ['a', 'b', 'c', 'd', 'e', 'f']; // Created array
var anotherArrayList = arrayList; // Referenced arrayList by another variable
arrayList.length = 0; // Empty the array by setting length to 0
console.log(anotherArrayList); // Output []
arrayList.splice(0, arrayList.length);
The implementation above also works perfectly. This way of emptying the array updates all references to the original array as well.
var arrayList = ['a', 'b', 'c', 'd', 'e', 'f']; // Created array
var anotherArrayList = arrayList; // Referenced arrayList by another variable
arrayList.splice(0, arrayList.length); // Empty the array by setting length to 0
console.log(anotherArrayList); // Output []
while(arrayList.length) {
arrayList.pop();
}
The implementation above can also empty the array, but it is not recommended for frequent use.
Answer
The best way to find out whether an object is an instance of a particular class is to use the toString method from Object.prototype.
var arrayList = [1 , 2, 3];
One of the best use cases for type checking an object is method overloading in JavaScript. To understand this, let's say we have a method called greet which can take either a single string or a list of strings. To make our greet method work in both situations, we need to know what kind of parameter is being passed: a single value or a list of values?
function greet(param) {
if() {
// here have to check whether param is array or not
}
else {
}
}
However, in the implementation above it may not be necessary to check for the array type explicitly; we can check whether the parameter is a single string and put the array logic in the else block. See the code below.
function greet(param) {
if(typeof param === 'string') {
}
else {
// If param is of type array then this block of code would execute
}
}
The previous two implementations are fine, but when a parameter can be a single value, an array, or an object, we will be in trouble.
Coming back to checking the type of an object: as mentioned, we can use Object.prototype.toString
if(Object.prototype.toString.call(arrayList) === '[object Array]') {
console.log('Array!');
}
If you are using jQuery then you can also use the jQuery isArray method:
if($.isArray(arrayList)) {
console.log('Array');
} else {
console.log('Not an array');
}
FYI jQuery uses Object.prototype.toString.call
internally to check whether an object is an array or not.
In modern browsers, you can also use:
Array.isArray(arrayList);
Array.isArray
is supported by Chrome 5, Firefox 4.0, IE 9, Opera 10.5 and Safari 5
var output = (function(x) {
delete x;
return x;
})(0);
console.log(output);
Answer
The code above will output 0
as output. delete
operator is used to delete a property from an object. Here x
is not an object, it's a local variable. delete
operator doesn't affect local variables.
var x = 1;
var output = (function() {
delete x;
return x;
})();
console.log(output);
Answer
The code above will output 1
as output. delete
operator is used to delete a property from an object. Here x
is not an object; it's a global variable of type number
.
var x = { foo : 1};
var output = (function() {
delete x.foo;
return x.foo;
})();
console.log(output);
Answer
The code above will output undefined
as output. delete
operator is used to delete a property from an object. Here x
is an object which has foo as a property and from a self-invoking function, we are deleting the foo
property of object x
and after deletion, we are trying to reference deleted property foo
which results in undefined
.
var Employee = {
company: 'xyz'
}
var emp1 = Object.create(Employee);
delete emp1.company
console.log(emp1.company);
Answer
The code above will output xyz as output. Here the emp1 object has company as a prototype property. The delete operator doesn't delete prototype properties.
emp1
object doesn't have company as its own property. You can verify this with console.log(emp1.hasOwnProperty('company')); // output: false
However, we can delete company property directly from Employee
object using delete Employee.company
or we can also delete from emp1
object using __proto__
property delete emp1.__proto__.company
.
What is undefined x 1 in JavaScript?
var trees = ["redwood", "bay", "cedar", "oak", "maple"];
delete trees[3];
Answer
When you run the code above and do console.log(trees); in the Chrome developer console, you will get ["redwood", "bay", "cedar", undefined × 1, "maple"]. In recent versions of Chrome you will see the word empty instead of undefined × 1. When you run the same code in the Firefox console, you will get ["redwood", "bay", "cedar", undefined, "maple"].
Clearly we can see that Chrome has its own way of displaying uninitialized index in arrays. However when you check trees[3] === undefined
in any browser you will get similar output as true
.
Note: Do not try to check for the uninitialized index with trees[3] === 'undefined × 1'; that comparison will never be true, because 'undefined × 1' is just Chrome's way of displaying an uninitialized index of an array, not an actual value.
var trees = ["xyz", "xxxx", "test", "ryan", "apple"];
delete trees[3];
console.log(trees.length);
Answer
The code above will output 5 as output. When we use the delete operator to delete an array element, the array length is not affected. This holds even if you delete all elements of the array using the delete operator.
When the delete operator removes an array element, that element is no longer present in the array. In place of the value at the deleted index, Chrome shows undefined × 1 (or empty) and Firefox shows undefined. If you do console.log(trees), the output is ["xyz", "xxxx", "test", undefined × 1, "apple"] in Chrome and ["xyz", "xxxx", "test", undefined, "apple"] in Firefox.
var bar = true;
console.log(bar + 0);
console.log(bar + "xyz");
console.log(bar + true);
console.log(bar + false);
Answer
The code above will output 1, "truexyz", 2, 1 as output. Here's a general guideline for the plus operator:
Number + Number: addition
Boolean + Number: the boolean is converted to a number (true becomes 1, false becomes 0), then addition
Boolean + Boolean: both are converted to numbers, then addition
Number + String: the number is converted to a string, then concatenation
String + Boolean: the boolean is converted to a string, then concatenation
String + String: concatenation
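A quick illustration of those rules (plain facts about the + operator, independent of the snippet above):
console.log(1 + 2);              // 3        (number + number)
console.log(true + 1);           // 2        (boolean converted to 1)
console.log(true + false);       // 1        (1 + 0)
console.log(5 + "5");            // "55"     (number converted to string)
console.log("result: " + true);  // "result: true"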
var z = 1, y = z = typeof y;
console.log(y);
Answer
The code above will print the string "undefined" as output. Operators with the same precedence are processed according to their associativity. The associativity of the assignment operator is right to left, so typeof y evaluates first, producing the string "undefined", which is assigned to z, and then y is assigned the value of z. The overall sequence looks like this:
var z;
z = 1;
var y;
z = typeof y;
y = z;
// NFE (Named Function Expression)
var foo = function bar() { return 12; };
typeof bar();
Answer
The output will be Reference Error
. To fix the bug we can try to rewrite the code a little bit:
Sample 1
var bar = function() { return 12; };
typeof bar();
or
Sample 2
function bar() { return 12; };
typeof bar();
A function definition can have only one reference variable as its function name. In Sample 1, bar is a reference variable pointing to an anonymous function, and in Sample 2 we have a function statement where bar is the function name.
var foo = function bar() {
// foo is visible here
// bar is visible here
console.log(typeof bar()); // Works here :)
};
// foo is visible here
// bar is undefined here
var foo = function() {
// Some code
}
function bar () {
// Some code
}
Answer
The main difference is that function foo
is defined at run-time
and is called a function expression, whereas function bar
is defined at parse time
and is called a function statement. To understand it better, let's take a look at the code below :
// Run-Time function declaration
foo(); // Call foo function here, It will give an error
var foo = function() {
console.log("Hi I am inside Foo");
};
// Parse-Time function declaration
bar(); // Call bar function here, It will not give an Error
function bar() {
console.log("Hi I am inside Foo");
}
bar();
(function abc(){console.log('something')})();
function bar(){console.log('bar got called')};
Answer
The output will be :
bar got called
something
Because bar is a function declaration, it is defined at parse time (hoisted), so by the time bar() is executed on the first line its definition is already available, even though it appears after the IIFE in the source. The IIFE then runs in order, which is why 'bar got called' is logged before 'something'.
Answer
Let's take the following function expression
var foo = function foo() {
return 12;
}
In JavaScript, var-declared variables and function declarations are hoisted. Let's look at variable hoisting first: the JavaScript interpreter looks ahead to find all variable declarations and hoists them to the top of the function in which they are declared. For example:
foo(); // Here foo is still undefined, so this call throws a TypeError
var foo = function foo() {
return 12;
};
Behind the scenes, the code above looks something like this:
var foo = undefined;
foo(); // Here foo is undefined, so calling it throws a TypeError
foo = function foo() {
// Some code stuff
}
var foo = undefined;
foo = function foo() {
// Some code stuff
}
foo(); // Now foo is defined here
var salary = "1000$";
(function () {
console.log("Original salary was " + salary);
var salary = "5000$";
console.log("My New Salary " + salary);
})();
Answer
The code above will output undefined, 5000$ because of hoisting. You might expect salary to retain its value from the outer scope until the point where salary is re-declared in the inner scope, but due to hoisting the salary value is undefined instead. To understand it better, have a look at the following code: here the salary variable is hoisted and declared at the top of the function scope, so when we print its value using console.log the result is undefined. Afterwards the variable is assigned the new value "5000$".
var salary = "1000$";
(function () {
var salary = undefined;
console.log("Original salary was " + salary);
salary = "5000$";
console.log("My New Salary " + salary);
})();
What is the difference between typeof and instanceof?
Answer
typeof
is an operator that returns a string with the type of whatever you pass.
The typeof
operator checks if a value belongs to one of the seven basic types: number
, string
, boolean
, object
, function
, undefined
or Symbol
.
typeof(null)
will return object
.
instanceof
is much more intelligent: it works on the level of prototypes. In particular, it tests to see if the right operand appears anywhere in the prototype chain of the left. instanceof
doesn’t work with primitive types. The instanceof
operator checks the current object and returns true if the object is of the specified type, for example:
var dog = new Animal();
dog instanceof Animal; // Output : true
Here dog instanceof Animal
is true since dog
inherits from Animal.prototype
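A complete, runnable version of that example (Animal here is just a hypothetical constructor added so the snippet can be executed as-is):
function Animal() {}                  // hypothetical constructor for illustration
var dog = new Animal();
console.log(dog instanceof Animal);   // true: Animal.prototype is in dog's prototype chain
console.log(dog instanceof Object);   // true: Object.prototype is also in the chain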
var name = new String("xyz");
name instanceof String; // Output : true
Ref Link: http://stackoverflow.com/questions/2449254/what-is-the-instanceof-operator-in-javascript
var counterArray = {
A : 3,
B : 4
};
counterArray["C"] = 1;
Answer
First of all, in the case of JavaScript an associative array is the same as an object. Secondly, even though there is no built-in function or property available to calculate the length/size of an object, we can write such a function ourselves.
Object
has keys
method which can be used to calculate the length of object.
Object.keys(counterArray).length; // Output 3
We can also calculate the length of an object by iterating through it and counting its own properties. This way we will ignore the properties that came from the object's prototype chain:
function getLength(object) {
var count = 0;
for(var key in object) {
// hasOwnProperty method check own property of object
if(object.hasOwnProperty(key)) count++;
}
return count;
}
All modern browsers (including IE9+) support the getOwnPropertyNames
method, so we can calculate the length using the following code:
Object.getOwnPropertyNames(counterArray).length; // Output 3
Underscore and lodash libraries have the method size
dedicated to calculate the object length. We don't recommend to include one of these libraries just to use the size
method, but if it's already used in your project - why not?
_.size({one: 1, two: 2, three: 3});
=> 3
Explain Function, Method and Constructor calls in JavaScript.
Answer
If you are familiar with object-oriented programming, you are most likely used to thinking of functions, methods, and class constructors as three separate things. But in JavaScript, these are just three different usage patterns of one single construct.
functions : The simplest usages of function call:
function helloWorld(name) {
return "hello world, " + name;
}
helloWorld("JS Geeks"); // "hello world JS Geeks"
Methods in JavaScript are nothing more than object properties that are functions.
var obj = {
helloWorld : function() {
return "hello world, " + this.name;
},
name: 'John Carter'
}
obj.helloWorld(); // "hello world, John Carter"
Notice how helloWorld refers to this.name of obj. Here it's clear that this gets bound to obj. But the interesting point is that we can copy a reference to the same function helloWorld into another object and get a different answer. Let's see:
var obj2 = {
helloWorld : obj.helloWorld,
name: 'John Doe'
}
obj2.helloWorld(); // "hello world, John Doe"
You might wonder what exactly happens in a method call here. The call expression itself determines the binding of this: the expression obj2.helloWorld() looks up the helloWorld property of obj2 and calls it with obj2 as the receiver object.
The third use of functions is as constructors. Like function and method, constructors
are defined with function.
function Employee(name, age) {
this.name = name;
this.age = age;
}
var emp1 = new Employee('John Doe', 28);
emp1.name; // "John Doe"
emp1.age; // 28
Unlike function calls and method calls, a constructor call new Employee('John Doe', 28)
creates a brand new object and passes it as the value of this
, and implicitly returns the new object as its result.
The primary role of the constructor function is to initialize the object.
function User(name) {
this.name = name || "JsGeeks";
}
var person = new User("xyz")["location"] = "USA";
console.log(person);
Answer
The output of above code would be "USA"
. Here new User("xyz")
creates a brand new object and created property location
on that and USA
has been assigned to object property location and that has been referenced by the person.
Let say new User("xyz")
created a object called foo
. The value "USA"
will be assigned to foo["location"]
, but according to ECMAScript Specification , pt 12.14.4 the assignment will itself return the rightmost value: in our case it's "USA"
. Then it will be assigned to person.
To better understand what's going on here, try to execute this code in console, line by line:
function User(name) {
this.name = name || "JsGeeks";
}
var person;
var foo = new User("xyz");
foo["location"] = "USA";
// the console will show you that the result of this is "USA"
Answer
It’s a technology that allows your web application to use cached resources first, and provide default experience offline, before getting more data from the network later. This principle is commonly known as Offline First.
Service Workers actively use promises. A Service Worker has to be installed and activated, and then it can react to fetch, push and sync events.
As of 2017, Service Workers are not supported in IE and Safari.
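A minimal sketch of how a page typically registers a Service Worker (the file name /sw.js is just an assumed path for illustration):
// In the page's main script: register the worker if the browser supports it
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/sw.js')   // '/sw.js' is an assumed path
.then(function (registration) {
console.log('Service Worker registered with scope:', registration.scope);
})
.catch(function (error) {
console.log('Service Worker registration failed:', error);
});
}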
Answer
In JS, that difference is quite subtle. A function is a piece of code that is called by name and is not associated with any object nor defined inside any object. It can be passed data to operate on (i.e. parameters) and can optionally return data (the return value).
// Function statement
function myFunc() {
// Do some stuff;
}
// Calling the function
myFunc();
Here the myFunc() call is not associated with any object and hence is not invoked through an object.
A function can take a form of immediately invoked function expression (IIFE):
// Anonymous Self-invoking Function
(function() {
// Do some stuff;
})();
Finally there are also arrow functions:
const myFunc = arg => {
console.log("hello", arg)
}
A method is a piece of code that is called by its name and that is associated with the object. Methods are functions. When you call a method like this obj1.myMethod()
, the reference to obj1
gets assigned (bound) to this
variable. In other words, the value of this
will be obj1
inside myMethod
.
Here are some examples of methods:
Example 1
var obj1 = {
attribute: "xyz",
myMethod: function () { // Method
console.log(this.attribute);
}
};
// Call the method
obj1.myMethod();
Here obj1
is an object and myMethod
is a method which is associated with obj1
.
Example 2
In ES6 we have classes. There the methods will look like this:
class MyAwesomeClass {
myMethod() {
console.log("hi there");
}
}
const obj1 = new MyAwesomeClass();
obj1.myMethod();
Understand: the method is not some kind of special type of a function, and it's not about how you declare a function. It's the way we call a function. Look at that:
var obj1 = {
prop1: "buddy"
};
var myFunc = function () {
console.log("Hi there", this);
};
// let's call myFunc as a function:
myFunc(); // will output "Hi there undefined" or "Hi there Window"
obj1.myMethod = myFunc;
//now we're calling myFunc as a method of obj1, so this will point to obj1
obj1.myMethod(); // will print "Hi there" following with obj1.
Answer
An IIFE (Immediately Invoked Function Expression) is a function that runs as soon as it's defined. Usually it's anonymous (doesn't have a function name), but it can also be named. Here's an example of an IIFE:
(function() {
console.log("Hi, I'm IIFE!");
})();
// outputs "Hi, I'm IIFE!"
So, here's how it works. Remember the difference between function statements (function a () {}
) and function expressions (var a = function() {}
)? So, IIFE is a function expression. To make it an expression we surround our function declaration into the parens. We do it to explicitly tell the parser that it's an expression, not a statement (JS doesn't allow statements in parens).
After the function you can see the two ()
braces, this is how we run the function we just declared.
That's it. The rest is details.
The function inside IIFE doesn't have to be anonymous. This one will work perfectly fine and will help to detect your function in a stacktrace during debugging:
(function myIIFEFunc() {
console.log("Hi, I'm IIFE!");
})();
// outputs "Hi, I'm IIFE!"
It can take some parameters:
Here there value "Yuri"
is passed to the param1
of the function.
(function myIIFEFunc(param1) {
console.log("Hi, I'm IIFE, " + param1);
})("Yuri");
// outputs "Hi, I'm IIFE, Yuri!"
It can return a value:
var result = (function myIIFEFunc(param1) {
console.log("Hi, I'm IIFE, " + param1);
return 1;
})("Yuri");
// outputs "Hi, I'm IIFE, Yuri!"
// result variable will contain 1
You don't have to surround the function declaration into parens, although it's the most common way to define IIFE. Instead you can use any of the following forms:
~function(){console.log("hi I'm IIFE")}()
!function(){console.log("hi I'm IIFE")}()
+function(){console.log("hi I'm IIFE")}()
-function(){console.log("hi I'm IIFE")}()
(function(){console.log("hi I'm IIFE")}());
var i = function(){console.log("hi I'm IIFE")}();
true && function(){ console.log("hi I'm IIFE") }();
0, function(){ console.log("hi I'm IIFE") }();
new function(){ console.log("hi I'm IIFE") }
new function(){ console.log("hi I'm IIFE") }()
Variables and functions that you declare inside an IIFE are not visible to the outside world, so you can use it to avoid polluting the global namespace, keep helper variables and functions private, and isolate a piece of code (for example a module or plugin) from the rest of the page.
Answer
The singleton pattern is an often used JavaScript design pattern. It provides a way to wrap code into a logical unit that can be accessed through a single variable. The singleton design pattern is used when only one instance of an object is needed throughout the lifetime of an application. In JavaScript, the singleton pattern has many uses: it can be used for namespacing, which reduces the number of global variables on your page (preventing pollution of the global space), and for organizing code in a consistent manner, which increases the readability and maintainability of your pages.
There are two important points in the traditional definition of the Singleton pattern: there should be only one instance allowed for the class, and there should be a single, global point of access to that instance.
Let me define singleton pattern in JavaScript context:
It is an object that is used to create a namespace and group together a related set of methods and attributes (encapsulation), and if we allow it to be instantiated, it can be instantiated only once.
In JavaScript, we can create a singleton through an object literal. There are other ways as well, such as the lazy-instantiation approach shown further below.
A singleton object consists of two parts: The object itself, containing the members (Both methods and attributes) within it, and global variable used to access it. The variable is global so that object can be accessed anywhere in the page, this is a key feature of the singleton pattern.
JavaScript: A Singleton as a Namespace
As stated above, a singleton can be used to declare a namespace in JavaScript. Namespacing is a large part of responsible programming in JavaScript: because everything can be overwritten, it is very easy to wipe out a variable, a function, or even a class by mistake without knowing it. A common example that happens frequently when you are working in parallel with another team member:
function findUserName(id) {
}
/* Later in the page another programmer
added code */
var findUserName = $('#user_list');
/* You are trying to call :( */
console.log(findUserName())
One of the best ways to prevent accidentally overwriting variable is to namespace your code within a singleton object.
/* Using Namespace */
var MyNameSpace = {
findUserName : function(id) {},
// Other methods and attribute go here as well
}
/* Later in the page another programmer
added code */
var findUserName = $('#user_list');
/* You are trying to call and you make this time workable */
console.log(MyNameSpace.findUserName());
/* Lazy Instantiation skeleton for a singleton pattern */
var MyNameSpace = {};
MyNameSpace.Singleton = (function() {
// Private attribute that holds the single instance
var singletonInstance;
// All of the normal code goes here
function constructor() {
// Private members
var privateVar1 = "Nishant";
var privateVar2 = [1,2,3,4,5];
function privateMethod1() {
// code stuff
}
function privateMethod2() {
// code stuff
}
return {
attribute1 : "Nishant",
publicMethod: function() {
alert("Nishant");// some code logic
}
}
}
return {
// public method (Global access point to Singleton object)
getInstance: function() {
//instance already exist then return
if(!singletonInstance) {
singletonInstance = constructor();
}
return singletonInstance;
}
}
})();
// getting access of publicMethod
console.log(MyNameSpace.Singleton.getInstance().publicMethod());
The singleton implemented above is easy to understand: it maintains a private reference to the lone singleton instance and returns that reference from the getInstance() method.
Answer
This method is useful if we want to create several similar objects. In the code sample below, we wrote the function Employee
and used it as a constructor by calling it with the new
operator.
function Employee(fName, lName, age, salary){
this.firstName = fName;
this.lastName = lName;
this.age = age;
this.salary = salary;
}
// Creating multiple object which have similar property but diff value assigned to object property.
var employee1 = new Employee('John', 'Moto', 24, '5000$');
var employee2 = new Employee('Ryan', 'Jor', 26, '3000$');
var employee3 = new Employee('Andre', 'Salt', 26, '4000$');
An object literal is the simplest way to create an object and is used frequently. Below is a code sample that creates an employee object containing both properties and a method.
var employee = {
name : 'Nishant',
salary : 245678,
getName : function(){
return this.name;
}
}
The code sample below is a nested object literal; here address is an object inside the employee object.
var employee = {
name : 'Nishant',
salary : 245678,
address : {
addressLine1 : 'BITS Pilani',
addressLine2 : 'Vidya Vihar',
phoneNumber: {
workPhone: 7098889765,
homePhone: 1234567898
}
}
}
Object using the new keyword
In the code below, a sample object has been created using Object's constructor function.
's constructor function.
var employee = new Object(); // Created employee object using new keywords and Object()
employee.name = 'Nishant';
employee.getName = function(){
return this.name;
}
Object.create
Object.create(obj)
will create a new object and set the obj
as its prototype. It’s a modern way to create objects that inherit properties from other objects. Object.create
function doesn’t run the constructor. You can use Object.create(null)
when you don’t want your object to inherit the properties of Object
.
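A short sketch of both variants (the proto object and its greet method here are hypothetical, just for illustration):
var proto = {
greet: function () { return 'hello ' + this.name; }  // hypothetical method
};
var obj = Object.create(proto);    // obj inherits greet from proto
obj.name = 'Nishant';
console.log(obj.greet());          // "hello Nishant"
var bare = Object.create(null);    // bare has no prototype at all
console.log(bare.toString);        // undefined, nothing inherited from Object.prototype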
Write a deepClone function that creates a deep copy of an object:
var newObject = deepClone(obj);
Answer
function deepClone(object){
  var newObject = {};
  for(var key in object){
    // Recursively clone nested objects (null is excluded because typeof null is also 'object')
    if(typeof object[key] === 'object' && object[key] !== null){
      newObject[key] = deepClone(object[key]);
    } else {
      // Primitive values (and functions) are copied directly
      newObject[key] = object[key];
    }
  }
  return newObject;
}
Explanation: We have been asked to do a deep copy of an object, so what does that actually mean? Say you have been given an object personalDetail; this object contains some properties which are themselves objects. As you can see below, address is an object and phoneNumber inside address is also an object. In simple terms, personalDetail is a nested object (an object inside an object). Here, a deep copy means we have to copy every property of the personalDetail object, including the nested objects.
var personalDetail = {
name : 'Nishant',
address : {
location: 'xyz',
zip : '123456',
phoneNumber : {
homePhone: 8797912345,
workPhone : 1234509876
}
}
}
So when we do deep clone then we should copy every property (including the nested object).
How do you check for an undefined object property in JavaScript?
Answer
Suppose we have given an object
person
var person = {
name: 'Nishant',
age : 24
}
Here the person
object has a name
and age
property. Now we are trying to access the salary property, which we haven't declared on the person object, so accessing it will return undefined. So how do we ensure that a property is not undefined before performing some operation on it?
Explanation:
We can use typeof
operator to check undefined
if(typeof someProperty === 'undefined'){
console.log('something is undefined here');
}
Now we are trying to access salary property of person object.
if(typeof person.salary === 'undefined'){
console.log("salary is undefined here because we haven't declared");
}
Write a function called Clone which takes an object and creates a shallow copy of it (it does not deep-copy the nested properties of the object).
var objectLit = {foo : 'Bar'};
var cloneObj = Clone(objectLit); // Clone is the function which you have to write
console.log(cloneObj === Clone(objectLit)); // this should return false
console.log(cloneObj == Clone(objectLit)); // this should return true
Answer
function Clone(object){
var newObject = {};
for(var key in object){
newObject[key] = object[key];
}
return newObject;
}
Answer
We use promises for handling asynchronous interactions in a sequential manner. They are especially useful when we need to do an async operation and THEN do another async operation based on the results of the first one. For example, if you want to request the list of all flights and then for each flight you want to request some details about it. The promise represents the future value. It has an internal state (pending
, fulfilled
and rejected
) and works like a state machine.
A promise object has then
method, where you can specify what to do when the promise is fulfilled or rejected.
You can chain then()
blocks, thus avoiding the callback hell. You can handle errors in the catch()
block. After a promise is set to fulfilled or rejected state, it becomes immutable.
Also mention that you know about more sophisticated concepts, such as async/await, which makes the code appear even more linear. Be sure that you can implement a promise yourself: read one of the articles on the topic and study the source code of the simplest promise implementation.
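A small sketch of the chaining style described above (getFlights and getFlightDetails are hypothetical async functions assumed to return promises):
// Hypothetical helpers that return promises
function getFlights() { return Promise.resolve([{ id: 1 }, { id: 2 }]); }
function getFlightDetails(flight) { return Promise.resolve({ id: flight.id, seats: 180 }); }
getFlights()
.then(function (flights) {
// Run a second async operation for each result of the first one
return Promise.all(flights.map(getFlightDetails));
})
.then(function (details) {
console.log('Flight details:', details);
})
.catch(function (error) {
// Any rejection in the chain ends up here
console.log('Something went wrong:', error);
});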
Answer
Let's say we have a person object with the properties name and age
var person = {
name: 'Nishant',
age: 24
}
Now we want to check whether the name property exists in the person object or not.
In JavaScript an object can have its own properties; in the example above, name and age are own properties of the person object. An object also has inherited properties from its base object; for example, toString is an inherited property of the person object.
So how do we check whether a property is an own property or an inherited property?
Method 1: We can use the in operator on the object to check for an own or inherited property.
console.log('name' in person); // checking own property print true
console.log('salary' in person); // checking undefined property print false
The in operator also looks into inherited properties if it doesn't find the property defined as an own property. For instance, if we check for the existence of the toString property, which we haven't declared on the person object, the in operator looks into its base (prototype) properties:
console.log('toString' in person); // Will print true
If we want to test only the object instance's own properties, not inherited ones, then we use the hasOwnProperty method of the object instance.
console.log(person.hasOwnProperty('toString')); // print false
console.log(person.hasOwnProperty('name')); // print true
console.log(person.hasOwnProperty('salary')); // print false
Answer
NaN
stands for "not a number" and is produced when an arithmetic operation cannot yield a meaningful number; a single NaN can quietly break a whole table of numeric results. Here are some examples of how you can get NaN
:
Math.sqrt(-5);
Math.log(-1);
parseFloat("foo"); /* this is common: you get JSON from the server, convert some strings from JSON to a number and end up with NaN in your UI. */
NaN
is not equal to any number, it’s not less or more than any number, also it's not equal to itself:
NaN !== NaN
NaN < 2 // false
NaN > 2 // false
NaN === 2 // false
To check if the current value of the variable is NaN, you have to use the isNaN
function. This is why we so often see NaN on web pages: it requires a special check which a lot of developers forget to do.
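For instance, a quick comparison of the global isNaN and the stricter ES6 Number.isNaN (both are standard built-ins):
var value = parseFloat("foo");        // NaN
console.log(value === NaN);           // false, never works
console.log(isNaN(value));            // true
console.log(isNaN("foo"));            // true, coerces the argument first
console.log(Number.isNaN("foo"));     // false, no coercion: only real NaN values pass
console.log(Number.isNaN(value));     // true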
Further reading: great blogpost on ariya.io
var arr = [10, 32, 65, 2];
for (var i = 0; i < arr.length; i++) {
setTimeout(function() {
console.log('The index of this number is: ' + i);
}, 3000);
}
Answer
For ES6, you can just replace var i
with let i
.
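For example, a minimal ES6 version using let (block scoping gives each iteration its own i):
var arr = [10, 32, 65, 2];
for (let i = 0; i < arr.length; i++) {
setTimeout(function () {
console.log('The index of this number is: ' + i);  // logs 0, 1, 2, 3
}, 3000);
}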
For ES5, you need to create a function scope like here:
var arr = [10, 32, 65, 2];
for (var i = 0; i < arr.length; i++) {
setTimeout(function(j) {
return function () {
console.log('The index of this number is: ' + j)
};
}(i), 3000);
}
This can also be achieved with forEach, which keeps the index variable within the callback's scope:
var arr = [10, 32, 65, 2];
arr.forEach(function(ele, i) {
setTimeout(function() {
console.log('The index of this number is: ' + i);
}, 3000);
})
Answer
We often encounter situations where we need to know whether a value is an array or not.
For instance, the code below performs a different operation based on the value's type:
function(value){
if("value is an array"){
// Then perform some operation
}else{
// otherwise
}
}
Let's discuss some way to detect an array in JavaScript.
Method 1:
Juriy Zaytsev (Also known as kangax) proposed an elegant solution to this.
function isArray(value){
return Object.prototype.toString.call(value) === '[object Array]';
}
This is the most popular and recommended way of detecting an array in JavaScript. It relies on the fact that the native toString() method produces a standard string for a given value in all browsers.
Method 2:
Duck typing test for array type detection
// Duck typing arrays
function isArray(value){
return typeof value.sort === 'function';
}
As we can see above, this isArray method will return true if the value has a sort method of type function. Now assume you have created an object with a sort method:
var bar = {
sort: function(){
// Some code
}
}
Now when you check isArray(bar), it will return true because the bar object has a sort method, but bar is not an array.
So this method is not the best way to detect an array, since it does not handle the case where some non-array object happens to have a sort method.
Method 3:
ECMAScript 5 introduced the Array.isArray() method to detect an array value; its sole purpose is to accurately detect whether a value is an array or not.
In many JavaScript libraries you may see the code below for detecting a value of type array:
function(value){
// ECMAScript 5 feature
if(typeof Array.isArray === 'function'){
return Array.isArray(value);
}else{
return Object.prototype.toString.call(value) === '[object Array]';
}
}
Method 4:
You can query the constructor name:
function isArray(value) {
return value.constructor.name === "Array";
}
Method 5:
You can check if a given value is an instanceof Array
:
function isArray(value) {
return value instanceof Array;
}
Answer
In JavaScript, objects are called reference types; any value other than a primitive is a reference type. There are several built-in reference types such as Object, Array, Function, Date, RegExp and Error.
Detecting object using typeof
operator
console.log(typeof {}); // object
console.log(typeof []); // object
console.log(typeof new Array()); // object
console.log(typeof null); // object
console.log(typeof new RegExp()); // object
console.log(typeof new Date()); // object
But the downside of using the typeof operator to detect an object is that typeof also returns object for null (even though null is actually a primitive; typeof null === 'object' is a long-standing quirk of the language).
The best way to detect an object of a specific reference type is to use the instanceof operator.
Syntax : value instanceof constructor
//Detecting an array
if(value instanceof Array){
console.log("value is type of array");
}
// Employee constructor function
function Employee(name){
this.name = name; // Public property
}
var emp1 = new Employee('John');
console.log(emp1 instanceof Employee); // true
instanceof not only checks the constructor which was used to create an object but also checks its prototype chain; see the example below.
console.log(emp1 instanceof Object); // true
Answer
The ECMAScript 5 Object.create() method is the easiest way for one object to inherit from another, without invoking a constructor function.
For instance:
var employee = {
name: 'Nishant',
displayName: function () {
console.log(this.name);
}
};
var emp1 = Object.create(employee);
console.log(emp1.displayName()); // output "Nishant"
In the example above, we create a new object emp1
that inherits from employee
. In other words emp1
's prototype is set to employee
. After this emp1 is able to access the same properties and method on employee until new properties or method with the same name are defined.
For instance: Defining displayName()
method on emp1
will not automatically override the employee displayName
.
emp1.displayName = function() {
console.log('xyz-Anonymous');
};
employee.displayName(); //Nishant
emp1.displayName();//xyz-Anonymous
In addition, the Object.create() method also allows you to specify a second argument, which is an object containing additional properties and methods to add to the new object.
For example
var emp1 = Object.create(employee, {
name: {
value: "John"
}
});
emp1.displayName(); // "John"
employee.displayName(); // "Nishant"
In the example above, emp1 is created with its own value for name, so calling the displayName() method will display "John" instead of "Nishant".
Objects created in this manner give you full control over the newly created object. You are free to add or remove any properties and methods you want.
Answer
Let say we have Person
class which has name, age, salary properties and incrementSalary() method.
function Person(name, age, salary) {
this.name = name;
this.age = age;
this.salary = salary;
this.incrementSalary = function (byValue) {
this.salary = this.salary + byValue;
};
}
Now we wish to create an Employee class which contains all the properties of the Person class and adds some additional properties of its own.
function Employee(company){
this.company = company;
}
//Prototypal Inheritance
Employee.prototype = new Person("Nishant", 24,5000);
In the example above, Employee type inherits from Person. It does so by assigning a new instance of Person
to Employee
prototype. After that, every instance of Employee
inherits its properties and methods from Person
.
//Prototypal Inheritance
Employee.prototype = new Person("Nishant", 24,5000);
var emp1 = new Employee("Google");
console.log(emp1 instanceof Person); // true
console.log(emp1 instanceof Employee); // true
Let's understand Constructor inheritance
//Defined Person class
function Person(name){
this.name = name || "Nishant";
}
var obj = {};
// obj inherit Person class properties and method
Person.call(obj); // constructor inheritance
console.log(obj); // Object {name: "Nishant"}
Here we saw that calling Person.call(obj) defines the name property from Person on obj.
console.log('name' in obj); // true
Type-based inheritance works best with developer-defined constructor functions rather than with the native built-ins. It also allows flexibility in how we create similar types of objects.
Answer
ECMAScript 5 introduced several methods to prevent modification of an object, locking it down to ensure that no one, accidentally or otherwise, changes its functionality.
There are three levels of preventing modification:
1: Prevent extensions :
No new properties or methods can be added to the object, but one can change the existing properties and method.
For example:
var employee = {
name: "Nishant"
};
// lock the object
Object.preventExtensions(employee);
// Now try to change the employee object property name
employee.name = "John"; // work fine
//Now try to add some new property to the object
employee.age = 24; // fails silently unless it's inside the strict mode
2: Seal :
It is the same as prevent extensions; in addition, it prevents existing properties and methods from being deleted.
To seal an object, we use the Object.seal() method. You can check whether an object is sealed or not using Object.isSealed():
var employee = {
name: "Nishant"
};
// Seal the object
Object.seal(employee);
console.log(Object.isExtensible(employee)); // false
console.log(Object.isSealed(employee)); // true
delete employee.name // fails silently unless it's in strict mode
// Trying to add new property will give an error
employee.age = 30; // fails silently unless in strict mode
When an object is sealed, its existing properties and methods can't be removed. Sealed objects are also non-extensible.
3: Freeze :
This is the same as seal; in addition, it prevents existing properties and methods from being modified (all properties and methods become read-only).
To freeze an object, use Object.freeze() method. We can also determine whether an object is frozen using Object.isFrozen();
var employee = {
name: "Nishant"
};
//Freeze the object
Object.freeze(employee);
// Seal the object
Object.seal(employee);
console.log(Object.isExtensible(employee)); // false
console.log(Object.isSealed(employee)); // true
console.log(Object.isFrozen(employee)); // true
employee.name = "xyz"; // fails silently unless in strict mode
employee.age = 30; // fails silently unless in strict mode
delete employee.name // fails silently unless it's in strict mode
Frozen objects are considered both non-extensible and sealed.
Recommended:
If you decide to prevent extensions, seal, or freeze an object, use strict mode so that you can catch the resulting errors.
For example:
"use strict";
var employee = {
name: "Nishant"
};
//Freeze the object
Object.freeze(employee);
// Seal the object
Object.seal(employee);
console.log(Object.isExtensible(employee)); // false
console.log(Object.isSealed(employee)); // true
console.log(Object.isFrozen(employee)); // true
employee.name = "xyz"; // fails silently unless in strict mode
employee.age = 30; // fails silently unless in strict mode
delete employee.name; // fails silently unless it's in strict mode
How do you add a prefix (your message) to every message you log using console.log? For example, if you log console.log("Some message") then the output should be (your message) Some message.
Answer
Logging error messages or informative messages is frequently required when you are dealing with client-side JavaScript using the console.log method. Sometimes you want to add a prefix to identify messages generated by your application, so you would like to prepend your app name to every console.log call.
A naive way to do this is to keep adding your app name to every console.log message, like this:
console.log('your app name' + 'some error message');
But done this way, you have to write your app name every time you log a message.
A better way to achieve this:
function appLog() {
var args = Array.prototype.slice.call(arguments);
args.unshift('your app name');
console.log.apply(console, args);
}
appLog("Some error message");
//output of above console: 'your app name Some error message'
For example, we can create a string using a string literal or using the String constructor function.
// using string literal
var ltrlStr = "Hi I am string literal";
// using String constructor function
var objStr = new String("Hi I am string object");
Answer
We can use typeof operator to test string literal and instanceof operator to test String object.
function isString(str) {
return typeof(str) == 'string' || str instanceof String;
}
var ltrlStr = "Hi I am string literal";
var objStr = new String("Hi I am string object");
console.log(isString(ltrlStr)); // true
console.log(isString(objStr)); // true
Answer
Anonymous functions are basically used in the following scenarios.
No name is needed if the function is only used in one place; there is no need to add a name to it.
Let's take the example of the setTimeout function. Here there is no need to use a named function when we are sure that the function which alerts hello will be used only once in the application.
setTimeout(function(){
alert("Hello");
},1000);
Anonymous functions are declared inline, and inline functions have the advantage that they can access variables in the parent scopes.
Let's take the example of an event handler that is notified of an event of a particular type (such as click) for a given object.
Let's say we have an HTML element (a button) to which we want to add a click event, and when the user clicks the button we would like to execute some logic.
Add Event Listener
The example below shows the use of an anonymous function as a callback in an event handler.
var btn = document.getElementById('myBtn');
btn.addEventListener('click', function () {
alert('button clicked');
});
<button id="myBtn"></button>
Passing anonymous function as a parameter to calling function.
Example:
// Function which will execute callback function
function processCallback(callback){
if(typeof callback === 'function'){
callback();
}
}
// Call function and pass anonymous function as callback
processCallback(function(){
alert("Hi I am anonymous callback function");
});
The best way to make a decision for using anonymous function is to ask the following question:
Will the function which I am going to define, be used anywhere else?
If your answer is yes then go and create named function rather anonymous function.
Advantage of using anonymous function:
Answer
If you are coming from Python or C#, you might be used to having a default value for a function parameter in case a value (actual argument) has not been passed. For instance:
// Define sentEmail function
// configuration : Configuration object
// provider : Email Service provider, Default would be gmail
def sentEmail(configuration, provider = 'Gmail'):
# Your code logic
In Pre ES6/ES2015
There are several ways to achieve this in pre-ES2015 JavaScript.
Let's look at the code below, which sets a default parameter value.
Method 1: Setting default parameter value
function sentEmail(configuration, provider) {
// Set default value if user has not passed value for provider
provider = typeof provider !== 'undefined' ? provider : 'Gmail'
// Your code logic
;
}
// In this call we are not passing provider parameter value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
});
// Here we are passing Yahoo Mail as a provider value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
}, 'Yahoo Mail');
Method 2: Setting default parameter value
function sentEmail(configuration, provider) {
// Set default value if user has not passed value for provider
provider = provider || 'Gmail'
// Your code logic
;
}
// In this call we are not passing provider parameter value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
});
// Here we are passing Yahoo Mail as a provider value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
}, 'Yahoo Mail');
Method 3: Setting default parameter value in ES6
function sentEmail(configuration, provider = "Gmail") {
// Set default value if user has not passed value for provider
// Value of provider can be accessed directly
console.log(`Provider: ${provider}`);
}
// In this call we are not passing provider parameter value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
});
// Here we are passing Yahoo Mail as a provider value
sentEmail({
from: 'xyz@gmail.com',
subject: 'Test Email'
}, 'Yahoo Mail');
Let's say you have two objects:
var person = {
name : 'John',
age : 24
}
var address = {
addressLine1 : 'Some Location x',
addressLine2 : 'Some Location y',
city : 'NewYork'
}
Write a merge function which takes two objects and adds all the own properties of the second object into the first object.
Answer
merge(person , address);
/* Now person should have 5 properties
name , age , addressLine1 , addressLine2 , city */
Method 1: Using ES6, Object.assign method
const merge = (toObj, fromObj) => Object.assign(toObj, fromObj);
Method 2: Without using built-in function
function merge(toObj, fromObj) {
// Make sure both of the parameter is an object
if (typeof toObj === 'object' && typeof fromObj === 'object') {
for (var pro in fromObj) {
// Assign only own properties not inherited properties
if (fromObj.hasOwnProperty(pro)) {
// Assign property and value
toObj[pro] = fromObj[pro];
}
}
}else{
throw "Merge function can apply only on object";
}
}
Answer
An object can have properties that don't show up when you iterate through it using a for...in loop or when using Object.keys() to get an array of property names. These properties are known as non-enumerable properties.
Let's say we have the following object:
var person = {
name: 'John'
};
person.salary = '10000$';
person['country'] = 'USA';
console.log(Object.keys(person)); // ['name', 'salary', 'country']
The person object's properties name, salary and country are enumerable, hence they show up when we call Object.keys(person).
To create a non-enumerable property we have to use Object.defineProperty(), a method that (among other things) lets us create non-enumerable properties in JavaScript.
var person = {
name: 'John'
};
person.salary = '10000$';
person['country'] = 'USA';
// Create non-enumerable property
Object.defineProperty(person, 'phoneNo',{
value : '8888888888',
enumerable: false
})
Object.keys(person); // ['name', 'salary', 'country']
In the example above phoneNo
property didn't show up because we made it non-enumerable by setting enumerable:false
Bonus
Now let's try to change value of phoneNo
person.phoneNo = '7777777777';
Object.defineProperty() also lets you create read-only properties. As we saw above, we are not able to modify the phoneNo value of the person object: the property descriptor has a writable attribute, which is false by default. Changing a non-writable property's value throws an error in strict mode; in non-strict mode it won't throw any error, but it also won't change the value of phoneNo.
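A small sketch of that read-only behaviour (the phoneNo property and values mirror the example above; the strict-mode wrapper is added just for illustration):
var person = { name: 'John' };
Object.defineProperty(person, 'phoneNo', {
value: '8888888888',
enumerable: false
// writable and configurable default to false
});
person.phoneNo = '7777777777';           // silently ignored in non-strict mode
console.log(person.phoneNo);             // '8888888888'
(function () {
'use strict';
try {
person.phoneNo = '7777777777';           // throws in strict mode
} catch (e) {
console.log(e instanceof TypeError);     // true
}
})();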
Answer
Function binding is an advanced JavaScript technique, very commonly used in conjunction with event handlers and callback functions to preserve the code execution context while passing a function as a parameter.
Let's consider the following example:
var clickHandler = {
message: 'click event handler',
handleClick: function(event) {
console.log(this.message);
}
};
var btn = document.getElementById('myBtn');
// Add click event to btn
btn.addEventListener('click', clickHandler.handleClick);
In this example, a clickHandler object is created which contains a message property and a handleClick method.
We assign the handleClick method as the click handler of a DOM button, so it is executed in response to a click. When the button is clicked, handleClick is called and logs to the console. Here console.log should log the click event handler message, but it actually logs undefined.
The problem is that the execution context of the clickHandler.handleClick method is not preserved, so this points to the button (btn) object instead. We can fix this issue using the bind method.
var clickHandler = {
message: 'click event handler',
handleClick: function(event) {
console.log(this.message);
}
};
var btn = document.getElementById('myBtn');
// Add click event to btn and bind the clickHandler object
btn.addEventListener('click', clickHandler.handleClick.bind(clickHandler));
The bind method is available on all functions, similar to the call and apply methods, which also take the value of this as an argument.
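A quick sketch contrasting bind, call and apply (the greet function and the user object are hypothetical, just for illustration):
var user = { name: 'Nishant' };
function greet(greeting, punctuation) {
console.log(greeting + ', ' + this.name + punctuation);
}
greet.call(user, 'Hello', '!');           // invokes immediately: "Hello, Nishant!"
greet.apply(user, ['Hi', '?']);           // same, but arguments as an array: "Hi, Nishant?"
var boundGreet = greet.bind(user, 'Hey'); // returns a new function with this (and 'Hey') fixed
boundGreet('...');                        // "Hey, Nishant..."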
Coding Questions
For a JS developer, it's crucially important to understand which values are passed by reference, and which ones are passed by value. Remember that objects, including arrays are passed by reference while strings, booleans and numbers are passed by value.
var strA = "hi there";
var strB = strA;
strB="bye there!";
console.log (strA)
Answer
The output will be 'hi there'
because we're dealing with strings here. Strings are passed by value, that is, copied.
var objA = {prop1: 42};
var objB = objA;
objB.prop1 = 90;
console.log(objA)
Answer
The output will be {prop1: 90}
because we're dealing with objects here. Objects are passed by reference, that is, objA
and objB
point to the same object in memory.
var objA = {prop1: 42};
var objB = objA;
objB = {};
console.log(objA)
Answer
The output will be {prop1: 42}
.
When we assign objA to objB, the objB variable will point to the same object as the objA variable.
However, when we reassign objB
to an empty object, we simply change where objB
variable references to. This doesn't affect where objA
variable references to.
var arrA = [0,1,2,3,4,5];
var arrB = arrA;
arrB[0]=42;
console.log(arrA)
Answer
The output will be [42,1,2,3,4,5]
.
Arrays are objects in JavaScript and they are passed and assigned by reference. This is why both arrA
and arrB
point to the same array [0,1,2,3,4,5]
. That's why changing the first element of the arrB
will also modify arrA
: it's the same array in the memory.
var arrA = [0,1,2,3,4,5];
var arrB = arrA.slice();
arrB[0]=42;
console.log(arrA)
Answer
The output will be [0,1,2,3,4,5]
.
The slice
function copies all the elements of the array returning the new array. That's why arrA
and arrB
reference two completely different arrays.
var arrA = [{prop1: "value of array A!!"}, {someProp: "also value of array A!"}, 3,4,5];
var arrB = arrA;
arrB[0].prop1=42;
console.log(arrA);
Answer
The output will be [{prop1: 42}, {someProp: "also value of array A!"}, 3,4,5]
.
Arrays are objects in JS, so both variables arrA and arrB point to the same array. Changing arrB[0] is the same as changing arrA[0].
var arrA = [{prop1: "value of array A!!"}, {someProp: "also value of array A!"},3,4,5];
var arrB = arrA.slice();
arrB[0].prop1=42;
arrB[3] = 20;
console.log(arrA);
Answer
The output will be [{prop1: 42}, {someProp: "also value of array A!"}, 3,4,5]
.
The slice
function copies all the elements of the array returning the new array. However, it doesn't do deep copying. Instead it does shallow copying. You can imagine slice implemented like this:
function slice(arr) {
var result = [];
for (i = 0; i< arr.length; i++) {
result.push(arr[i]);
}
return result;
}
Look at the line with result.push(arr[i])
. If arr[i]
happens to be a number or string, it will be passed by value, in other words, copied. If arr[i]
is an object, it will be passed by reference.
In case of our array arr[0]
is an object {prop1: "value of array A!!"}
. Only the reference to this object will be copied. This effectively means that arrays arrA and arrB share first two elements.
This is why changing the property of arrB[0]
in arrB
will also change the arrA[0]
.
console.log(employeeId);
var employeeId = '19000';
Answer
The code above will output undefined. The declaration of employeeId is hoisted to the top, but the assignment happens only after the console.log line.
var employeeId = '1234abe';
(function(){
console.log(employeeId);
var employeeId = '122345';
})();
Answer
The code above will output undefined. Inside the IIFE the local var employeeId is hoisted to the top of the function, shadowing the outer variable, and it has not been assigned yet when console.log runs.
var employeeId = '1234abe';
(function() {
console.log(employeeId);
var employeeId = '122345';
(function() {
var employeeId = 'abc1234';
}());
}());
Answer
The code above will output undefined, for the same reason as above: the inner var employeeId declaration is hoisted and shadows the outer variable before the assignment takes place.
(function() {
console.log(typeof displayFunc);
var displayFunc = function(){
console.log("Hi I am inside displayFunc");
}
}());
Answer
The code above will output undefined (the string returned by typeof), because var displayFunc is hoisted to the top of the IIFE but has no value yet when typeof is evaluated.
var employeeId = 'abc123';
function foo(){
employeeId = '123bcd';
return;
}
foo();
console.log(employeeId);
Answer
The code above will output 123bcd. There is no var declaration inside foo, so the assignment updates the global employeeId.
var employeeId = 'abc123';
function foo() {
employeeId = '123bcd';
return;
function employeeId() {}
}
foo();
console.log(employeeId);
Answer
The code above will output abc123. The function declaration function employeeId() {} inside foo is hoisted, making employeeId local to foo, so the assignment only changes the local binding and the global variable is untouched.
var employeeId = 'abc123';
function foo() {
employeeId();
return;
function employeeId() {
console.log(typeof employeeId);
}
}
foo();
Answer
The code above will output function. Inside foo, employeeId refers to the hoisted local function declaration, so typeof employeeId evaluates to 'function'.
function foo() {
employeeId();
var product = 'Car';
return;
function employeeId() {
console.log(product);
}
}
foo();
Answer
The code above will output undefined. When employeeId() is called, the hoisted var product declaration exists but the value 'Car' has not been assigned yet, so product is still undefined at that point.
(function foo() {
bar();
function bar() {
abc();
console.log(typeof abc);
}
function abc() {
console.log(typeof bar);
}
}());
Answer
The code above will output function twice. bar() runs first; inside it, abc() logs typeof bar ('function'), and then bar logs typeof abc ('function'). Both function declarations are hoisted within the IIFE.
(function() {
'use strict';
var person = {
name: 'John'
};
person.salary = '10000$';
person['country'] = 'USA';
Object.defineProperty(person, 'phoneNo', {
value: '8888888888',
enumerable: true
})
console.log(Object.keys(person));
})();
Answer
The code above will output ["name", "salary", "country", "phoneNo"], because phoneNo was defined with enumerable: true and therefore shows up in Object.keys.
(function() {
'use strict';
var person = {
name: 'John'
};
person.salary = '10000$';
person['country'] = 'USA';
Object.defineProperty(person, 'phoneNo', {
value: '8888888888',
enumerable: false
})
console.log(Object.keys(person));
})();
Answer
The code above will output ["name", "salary", "country"]. phoneNo was defined with enumerable: false, so Object.keys does not include it.
(function() {
var objA = {
foo: 'foo',
bar: 'bar'
};
var objB = {
foo: 'foo',
bar: 'bar'
};
console.log(objA == objB);
console.log(objA === objB);
}());
Answer
The code above will output false false. objA and objB are two different objects in memory, so neither == nor === considers them equal, even though their contents are identical.
(function() {
var objA = new Object({foo: "foo"});
var objB = new Object({foo: "foo"});
console.log(objA == objB);
console.log(objA === objB);
}());
Answer
The code above will output false false, for the same reason: two separate object instances are never equal by == or ===.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = Object.create({
foo: 'foo'
});
console.log(objA == objB);
console.log(objA === objB);
}());
Answer
Both logs print false. Object.create returns a new object each time, so the references differ.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = Object.create(objA);
console.log(objA == objB);
console.log(objA === objB);
}());
Answer
Both logs print false. objB's prototype is objA, but objB is still a different object, so the references are not equal.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = Object.create(objA);
console.log(objA.toString() == objB.toString());
console.log(objA.toString() === objB.toString());
}());
Answer
Both logs print true. toString() returns the string "[object Object]" for each, and equal strings compare as equal.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = objA;
console.log(objA == objB);
console.log(objA === objB);
console.log(objA.toString() == objB.toString());
console.log(objA.toString() === objB.toString());
}());
Answer
All four logs print true. objB is simply another reference to the same object as objA.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = objA;
objB.foo = 'bar';
console.log(objA.foo);
console.log(objB.foo);
}());
Answer
Both logs print bar. objA and objB reference the same object, and the assignment creates an own foo property that shadows the prototype's foo.
(function() {
var objA = Object.create({
foo: 'foo'
});
var objB = objA;
objB.foo = 'bar';
delete objA.foo;
console.log(objA.foo);
console.log(objB.foo);
}());
Answer
Both logs print foo. delete removes the own foo property, so lookups fall back to the prototype, which still has foo: 'foo'.
(function() {
var objA = {
foo: 'foo'
};
var objB = objA;
objB.foo = 'bar';
delete objA.foo;
console.log(objA.foo);
console.log(objB.foo);
}());
Answer
Both logs print undefined. Here foo is an own property of a plain object, so deleting it leaves nothing to fall back on.
(function() {
var array = new Array('100');
console.log(array);
console.log(array.length);
}());
Answer
The output will be ["100"] and 1. Passing a single string to the Array constructor creates a one-element array containing that string.
(function() {
var array1 = [];
var array2 = new Array(100);
var array3 = new Array(['1',2,'3',4,5.6]);
console.log(array1);
console.log(array2);
console.log(array3);
console.log(array3.length);
}());
Answer
The output will be [] for array1, an empty array of length 100 (100 empty slots) for array2, and [["1", 2, "3", 4, 5.6]] for array3, whose length is 1 because the whole inner array is its single element.
(function () {
var array = new Array('a', 'b', 'c', 'd', 'e');
array[10] = 'f';
delete array[10];
console.log(array.length);
}());
Answer
The output will be 11. Assigning to index 10 extends the length to 11, and delete removes the element but does not change the array's length.
(function(){
var animal = ['cow','horse'];
animal.push('cat');
animal.push('dog','rat','goat');
console.log(animal.length);
})();
Answer
The output will be 6. push appends one element and then three more to the original two.
(function(){
var animal = ['cow','horse'];
animal.push('cat');
animal.unshift('dog','rat','goat');
console.log(animal);
})();
Answer
The output will be ["dog", "rat", "goat", "cow", "horse", "cat"]. unshift adds its arguments to the front of the array in the given order.
(function(){
var array = [1,2,3,4,5];
console.log(array.indexOf(2));
console.log([{name: 'John'},{name : 'John'}].indexOf({name:'John'}));
console.log([[1],[2],[3],[4]].indexOf([3]));
console.log("abcdefgh".indexOf('e'));
})();
Answer
The output will be 1, -1, -1 and 4. indexOf finds 2 at index 1; the object and array literals are new references, so they are never found; and 'e' is at index 4 of the string.
(function(){
var array = [1,2,3,4,5,1,2,3,4,5,6];
console.log(array.indexOf(2));
console.log(array.indexOf(2,3));
console.log(array.indexOf(2,10));
})();
Answer
The output will be 1, 6 and -1. The second argument of indexOf is the index to start searching from, and no 2 exists at or after index 10.
(function(){
var numbers = [2,3,4,8,9,11,13,12,16];
var even = numbers.filter(function(element, index){
return element % 2 === 0;
});
console.log(even);
var containsDivisibleby3 = numbers.some(function(element, index){
return element % 3 === 0;
});
console.log(containsDivisibleby3);
})();
Answer
The output will be [2, 4, 8, 12, 16] and true. filter keeps the even numbers, and some returns true because at least one element (3, 9 or 12) is divisible by 3.
(function(){
var containers = [2,0,false,"", '12', true];
var containers = containers.filter(Boolean);
console.log(containers);
var containers = containers.filter(Number);
console.log(containers);
var containers = containers.filter(String);
console.log(containers);
var containers = containers.filter(Object);
console.log(containers);
})();
Answer
All four logs print [2, "12", true]. The first filter(Boolean) drops the falsy values 0, false and ""; converting the remaining values with Number, String or Object always yields something truthy, so the later filters remove nothing.
(function(){
var list = ['foo','bar','john','ritz'];
console.log(list.slice(1));
console.log(list.slice(1,3));
console.log(list.slice());
console.log(list.slice(2,2));
console.log(list);
})();
Answer
The output will be ["bar", "john", "ritz"], ["bar", "john"], ["foo", "bar", "john", "ritz"], [] and finally the unchanged list ["foo", "bar", "john", "ritz"]. slice never modifies the original array.
(function(){
var list = ['foo','bar','john'];
console.log(list.splice(1));
console.log(list.splice(1,2));
console.log(list);
})();
Answer
The output will be ["bar", "john"], [] and ["foo"]. The first splice removes everything from index 1 and returns it, leaving only "foo", so the second splice has nothing to remove.
(function(){
var arrayNumb = [2, 8, 15, 16, 23, 42];
arrayNumb.sort();
console.log(arrayNumb);
})();
Answer
The output will be [15, 16, 2, 23, 42, 8]. Without a comparator, sort converts the elements to strings and orders them lexicographically.
function funcA(){
console.log("funcA ", this);
(function innerFuncA1(){
console.log("innerFunc1", this);
(function innerFunA11(){
console.log("innerFunA11", this);
})();
})();
}
console.log(funcA());
Answer
All three functions log the global object (window in a browser), because a plain function call's this defaults to the global object in non-strict mode; the final console.log prints undefined since funcA returns nothing.
1)
var obj = {
message: "Hello",
innerMessage: !(function() {
console.log(this.message);
})()
};
console.log(obj.innerMessage);
Answer
The output will be undefined followed by true. The IIFE runs while the object literal is being built, so this is the global object and this.message is undefined; the IIFE returns undefined, and !undefined is true, which is the value innerMessage holds.
var obj = {
message: "Hello",
innerMessage: function() {
return this.message;
}
};
console.log(obj.innerMessage());
Answer
The output will be Hello. innerMessage is called as a method of obj, so this refers to obj.
var obj = {
message: 'Hello',
innerMessage: function () {
(function () {
console.log(this.message);
}());
}
};
console.log(obj.innerMessage());
Answer
The output will be undefined. The inner IIFE is a plain function call, so its this is the global object, not obj; the outer console.log also prints undefined because innerMessage returns nothing.
var obj = {
message: 'Hello',
innerMessage: function () {
var self = this;
(function () {
console.log(self.message);
}());
}
};
console.log(obj.innerMessage());
Answer
The output will be Hello. Capturing this in the self variable lets the inner function reach obj's message property; the outer console.log still prints undefined.
function myFunc(){
console.log(this.message);
}
myFunc.message = "Hi John";
console.log(myFunc());
Answer
The output will be undefined. Inside a plain function call, this is the global object, which has no message property; attaching message to the function object does not make it reachable through this.
function myFunc(){
console.log(myFunc.message);
}
myFunc.message = "Hi John";
console.log(myFunc());
Answer
The output will be Hi John, because the function reads the message property directly from the myFunc function object; the outer console.log then prints undefined.
function myFunc() {
myFunc.message = 'Hi John';
console.log(myFunc.message);
}
console.log(myFunc());
Answer
The output will be Hi John. The function first assigns its own message property and then logs it; the outer console.log prints undefined.
function myFunc(param1,param2) {
console.log(myFunc.length);
}
console.log(myFunc());
console.log(myFunc("a","b"));
console.log(myFunc("a","b","c","d"));
Answer
The output will be 2, 2 and 2. myFunc.length is the number of declared parameters (param1 and param2), regardless of how many arguments are actually passed; each outer console.log also prints undefined because myFunc returns nothing.
function myFunc() {
console.log(arguments.length);
}
console.log(myFunc());
console.log(myFunc("a","b"));
console.log(myFunc("a","b","c","d"));
Answer
The output will be 0, 2 and 4. arguments.length reflects the number of arguments actually passed on each call; the outer console.log prints undefined each time.
function Person(name, age){
this.name = name || "John";
this.age = age || 24;
this.displayName = function(){
console.log(this.name);
}
}
Person.name = "John";
Person.displayName = function(){
console.log(this.name);
}
var person1 = new Person('John');
person1.displayName();
Person.displayName();
Answer
The output will be John followed by Person. person1.displayName() uses the instance's name. Person.displayName() is called on the function object itself, and because a function's built-in name property is read-only, the assignment Person.name = "John" is ignored and this.name is still "Person".
function passWordMngr() {
var password = '12345678';
this.userName = 'John';
return {
pwd: password
};
}
// Block End
var userInfo = passWordMngr();
console.log(userInfo.pwd);
console.log(userInfo.userName);
Answer
The output will be 12345678 and undefined. passWordMngr is called without new, so this.userName ends up on the global object, and the returned object only contains the pwd property.
var employeeId = 'aq123';
function Employee() {
this.employeeId = 'bq1uy';
}
console.log(Employee.employeeId);
Answer
The output will be undefined. employeeId is set on instances created with new, not on the Employee function object itself.
var employeeId = 'aq123';
function Employee() {
this.employeeId = 'bq1uy';
}
console.log(new Employee().employeeId);
Employee.prototype.employeeId = 'kj182';
Employee.prototype.JobId = '1BJKSJ';
console.log(new Employee().JobId);
console.log(new Employee().employeeId);
Answer
The output will be bq1uy, 1BJKSJ and bq1uy. The constructor's own employeeId property shadows the one later added to the prototype, while JobId is only found on the prototype.
var employeeId = 'aq123';
(function Employee() {
try {
throw 'foo123';
} catch (employeeId) {
console.log(employeeId);
}
console.log(employeeId);
}());
Answer
The output will be foo123 followed by aq123. Inside the catch block the parameter employeeId shadows the outer variable; outside it, the global value is logged.
(function() {
var greet = 'Hello World';
var toGreet = [].filter.call(greet, function(element, index) {
return index > 5;
});
console.log(toGreet);
}());
Answer
The output will be ["W", "o", "r", "l", "d"]. filter is borrowed and called on the string, keeping the characters whose index is greater than 5.
(function() {
var fooAccount = {
name: 'John',
amount: 4000,
deductAmount: function(amount) {
this.amount -= amount;
return 'Total amount left in account: ' + this.amount;
}
};
var barAccount = {
name: 'John',
amount: 6000
};
var withdrawAmountBy = function(totalAmount) {
return fooAccount.deductAmount.bind(barAccount, totalAmount);
};
console.log(withdrawAmountBy(400)());
console.log(withdrawAmountBy(300)());
}());
Answer
The output will be "Total amount left in account: 5600" and then "Total amount left in account: 5300". bind makes this refer to barAccount, so both deductions reduce barAccount.amount from 6000.
(function() {
var fooAccount = {
name: 'John',
amount: 4000,
deductAmount: function(amount) {
this.amount -= amount;
return this.amount;
}
};
var barAccount = {
name: 'John',
amount: 6000
};
var withdrawAmountBy = function(totalAmount) {
return fooAccount.deductAmount.apply(barAccount, [totalAmount]);
};
console.log(withdrawAmountBy(400));
console.log(withdrawAmountBy(300));
console.log(withdrawAmountBy(200));
}());
Answer
The output will be 5600, 5300 and 5100. apply invokes deductAmount with this set to barAccount, so its amount of 6000 is reduced on every call.
(function() {
var fooAccount = {
name: 'John',
amount: 6000,
deductAmount: function(amount) {
this.amount -= amount;
return this.amount;
}
};
var barAccount = {
name: 'John',
amount: 4000
};
var withdrawAmountBy = function(totalAmount) {
return fooAccount.deductAmount.call(barAccount, totalAmount);
};
console.log(withdrawAmountBy(400));
console.log(withdrawAmountBy(300));
console.log(withdrawAmountBy(200));
}());
Answer
The output will be 3600, 3300 and 3100. call sets this to barAccount, whose amount starts at 4000 and is reduced on each withdrawal.
(function greetNewCustomer() {
console.log('Hello ' + this.name);
}.bind({
name: 'John'
})());
Answer
The output will be Hello John. bind fixes this to the object { name: 'John' } before the function is immediately invoked.
function getDataFromServer(apiUrl){
var name = "John";
return {
then : function(fn){
fn(name);
}
}
}
getDataFromServer('www.google.com').then(function(name){
console.log(name);
});
Answer
The output will be John. The then function simply invokes the callback with the name captured in the closure.
(function(){
var arrayNumb = [2, 8, 15, 16, 23, 42];
Array.prototype.sort = function(a,b){
return a - b;
};
arrayNumb.sort();
console.log(arrayNumb);
})();
(function(){
var numberArray = [2, 8, 15, 16, 23, 42];
numberArray.sort(function(a,b){
if(a == b){
return 0;
}else{
return a < b ? -1 : 1;
}
});
console.log(numberArray);
})();
(function(){
var numberArray = [2, 8, 15, 16, 23, 42];
numberArray.sort(function(a,b){
return a-b;
});
console.log(numberArray);
})();
Answer
All three logs print [2, 8, 15, 16, 23, 42]. Note that the first IIFE overrides Array.prototype.sort with a function that is not a real sort implementation, so no actual sorting happens in any of the snippets; the output only looks sorted because the array was already in ascending order. Overriding built-in prototype methods like this is a bad practice.
(function(){
function sayHello(){
var name = "Hi John";
return
{
fullName: name
}
}
console.log(sayHello().fullName);
})();
Answer
This code throws a TypeError (cannot read property 'fullName' of undefined). Automatic semicolon insertion turns the bare return into return;, so sayHello() returns undefined and the object literal below it is never returned.
function getNumber(){
return (2,4,5);
}
var numb = getNumber();
console.log(numb);
Answer
The output will be 5. The comma operator evaluates each operand and returns the value of the last one.
function getNumber(){
return;
}
var numb = getNumber();
console.log(numb);
Answer
The output will be undefined, because a bare return statement returns no value.
function mul(x){
return function(y){
return [x*y, function(z){
return x*y + z;
}];
}
}
console.log(mul(2)(3)[0]);
console.log(mul(2)(3)[1](4));
Answer
The output will be 6 and 10. mul(2)(3) returns [6, function], so index 0 is the product and index 1 is a function that adds its argument to the product.
function mul(x) {
return function(y) {
return {
result: x * y,
sum: function(z) {
return x * y + z;
}
};
};
}
console.log(mul(2)(3).result);
console.log(mul(2)(3).sum(4));
Answer
The output will be 6 and 10, with the same logic as above but with the product and the adder exposed as the result and sum properties of an object.
function mul(x) {
return function(y) {
return function(z) {
return function(w) {
return function(p) {
return x * y * z * w * p;
};
};
};
};
}
console.log(mul(2)(3)(4)(5)(6));
Answer
The output will be 720, i.e. 2 * 3 * 4 * 5 * 6, computed through the chain of nested closures.
function getName1(){
console.log(this.name);
}
Object.prototype.getName2 = () =>{
console.log(this.name)
}
let personObj = {
name:"Tony",
print:getName1
}
personObj.print();
personObj.getName2();
Answer
The output will be Tony followed by undefined. getName1 is called as a method of personObj, so this.name is "Tony"; getName2 is an arrow function, so its this does not refer to personObj and no name property is found.
Explanation: getName1() works fine because it is called as a method of personObj, so this.name resolves to the object's name property. getName2(), however, is defined as an arrow function on Object.prototype, so its this does not refer to personObj and there is no name property it can reach. For it to print something, a name property would have to exist on the prototype, as in the following code:
function getName1(){
console.log(this.name);
}
Object.prototype.getName2 = () =>{
console.log(Object.getPrototypeOf(this).name);
}
let personObj = {
name:"Tony",
print:getName1
}
personObj.print();
Object.prototype.name="Steve";
personObj.getName2();
We always appreciate your feedback on how the book can be improved and which questions could be added. If you think you have a good question, please add it and open a pull request.
Author: Ganqqwerty
Source Code: https://github.com/ganqqwerty/123-Essential-JavaScript-Interview-Questions
License: BSD-3-Clause license
This blog features the most frequently asked DevOps interview questions and answers that you must prepare to ace your interview
Are you a DevOps engineer or thinking of getting into DevOps? Well then, you have landed on the right article. In DevOps Interview Questions article, I have listed out dozens of possible questions that interviewers ask potential DevOps hires.
The crucial thing to understand is that DevOps is not merely a collection of technologies but rather a way of thinking, a culture. DevOps requires a cultural shift that merges operations with development and demands a linked toolchain of technologies to facilitate collaborative change. Since the DevOps philosophy is still at a very nascent stage, application of DevOps as well as the bandwidth required to adapt and collaborate, varies from organization to organization. However, you can develop a portfolio of DevOps skills that can present you as a perfect candidate for any type of organization.
If you want to develop your DevOps skills in a thoughtful, structured manner and get certified as a DevOps Engineer, we would be glad to help. Once you finish the DevOps certification training, we promise that you will be able to handle a variety of DevOps roles in the industry.
What are the requirements to become a DevOps Engineer?
When looking to fill out DevOps roles, organizations look for a clear set of skills. The most important of these are:
If you have the above skills, then you are ready to start preparing for your DevOps interview! If not, don’t worry – our DevOps certification training will help you master DevOps.
In order to structure the questions below, I put myself in your shoes. Most of the answers in this blog are written from your perspective, i.e. someone who is a potential DevOps expert. I have also segregated the questions in the following manner:
These are the top questions you might face in a DevOps job interview:
General DevOps Interview Questions
This category will include questions that are not related to any particular DevOps stage. Questions here are meant to test your understanding about DevOps rather than focusing on a particular tool or a stage.
The differences between the two are listed down in the table below.
Features | DevOps | Agile |
---|---|---|
Agility | Agility in both Development & Operations | Agility in only Development |
Processes/ Practices | Involves processes such as CI, CD, CT, etc. | Involves practices such as Agile Scrum, Agile Kanban, etc. |
Key Focus Area | Timeliness & quality have equal priority | Timeliness is the main priority |
Release Cycles/ Development Sprints | Smaller release cycles with immediate feedback | Smaller release cycles |
Source of Feedback | Feedback is from self (Monitoring tools) | Feedback is from customers |
Scope of Work | Agility & need for Automation | Agility only |
According to me, this answer should start by explaining the general market trend. Instead of releasing big sets of features, companies are trying to see if small features can be transported to their customers through a series of release trains. This has many advantages like quick feedback from customers, better quality of software etc. which in turn leads to high customer satisfaction. To achieve this, companies are required to:
DevOps fulfills all these requirements and helps in achieving seamless software delivery. You can give examples of companies like Etsy, Google and Amazon which have adopted DevOps to achieve levels of performance that were unthinkable even five years ago. They are doing tens, hundreds or even thousands of code deployments per day while delivering world class stability, reliability and security.
If I have to test your knowledge on DevOps, you should know the difference between Agile and DevOps. The next question is directed towards that.
I would advise you to go with the below explanation:
Agile is a set of values and principles about how to produce i.e. develop software. Example: if you have some ideas and you want to turn those ideas into working software, you can use the Agile values and principles as a way to do that. But, that software might only be working on a developer’s laptop or in a test environment. You want a way to quickly, easily and repeatably move that software into production infrastructure, in a safe and simple way. To do that you need DevOps tools and techniques.
You can summarize by saying Agile software development methodology focuses on the development of software but DevOps on the other hand is responsible for development as well as deployment of the software in the safest and most reliable way possible.
Now remember, you have included DevOps tools in your previous answer so be prepared to answer some questions related to that.
The most popular DevOps tools are mentioned below:
You can also mention any other tool if you want, but make sure you include the above tools in your answer.
The second part of the answer has two possibilities:
Given below is a generic logical flow where everything gets automated for seamless delivery. However, this flow may vary from organization to organization as per the requirement.
For this answer, you can use your past experience and explain how DevOps helped you in your previous job. If you don’t have any such experience, then you can mention the below advantages.
Technical benefits:
Business benefits:
According to me, the most important thing that DevOps helps us achieve is to get the changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps.
However, you can add many other positive effects of DevOps. For example, clearer communication and better working relationships between teams i.e. both the Ops team and Dev team collaborate together to deliver good quality software which in turn leads to higher customer satisfaction.
There are many industries that are using DevOps so you can mention any of those use cases, you can also refer the below example:
Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and supplies, as well as unique factory-manufactured items. Etsy struggled with slow, painful site updates that frequently caused the site to go down. It affected sales for millions of Etsy’s users who sold goods through online market place and risked driving them to the competitor.
With the help of a new technical management team, Etsy transitioned from its waterfall model, which produced four-hour full-site deployments twice weekly, to a more agile approach. Today, it has a fully automated deployment pipeline, and its continuous delivery practices have reportedly resulted in more than 50 deployments a day with fewer disruptions.
For this answer, share your past experience and try to explain how flexible you were in your previous job. You can refer the below example:
DevOps engineers almost always work in a 24/7 business-critical online environment. I was adaptable to on-call duties and was available to take up real-time, live-system responsibility. I successfully automated processes to support continuous software deployments. I have experience with public/private clouds, tools like Chef or Puppet, scripting and automation with tools like Python and PHP, and a background in Agile.
A pattern is common usage usually followed. If a pattern commonly adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern. There are myths about DevOps. Some of them include:
The various phases of the DevOps lifecycle are as follows:
KPI means Key Performance Indicator. KPIs are used to measure the performance of a DevOps team, identify mistakes and rectify them. This helps the DevOps team increase productivity, which directly impacts revenue.
There are many KPIs which one can track in a DevOps team. Following are some of them:
As we know before DevOps there are two other software development models:
In the waterfall model, we have the limitations of one-way working and a lack of communication with customers. Agile overcame this by including communication between the customer and the company through feedback. But in this model another issue arises: communication between the development team and the operations team, which delays the speed of production. This is where DevOps is introduced. It bridges the gap between the development team and the operations team by including automation, which increases the speed of production. With automation, testing is integrated into the development stage, so bugs are found at a very early stage, which increases speed and efficiency.
AWS (Amazon Web Services) is one of the most popular cloud providers. AWS offers several benefits for DevOps:
Now let’s look at some interview questions on VCS.
This is probably the easiest question you will face in the interview. My suggestion is to first give a definition of version control. It is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.
Version control allows you to:
I will suggest you to include the following advantages of version control:
This question is asked to test your branching experience so tell them about how you have used branching in your previous job and what purpose does it serves, you can refer the below points:
In the end, tell them that branching strategies vary from one organization to another, and that you know the basic branching operations like delete, merge, checking out a branch, etc.
You can just mention the VCS tool that you have worked on like this: “I have worked on Git and one major advantage it has over other VCS tools like SVN is that it is a distributed version control system.”
Distributed VCS tools do not necessarily rely on a central server to store all the versions of a project’s files. Instead, every developer “clones” a copy of a repository and has the full history of the project on their own hard drive.
I will suggest that you attempt this question by first explaining about the architecture of git as shown in the below diagram. You can refer to the explanation given below:
Below are some basic Git commands:
There can be two answers to this question so make sure that you include both because any of the below options can be used depending on the situation:
There are two options to squash last N commits into a single commit. Include both of the below mentioned options in your answer:
I will suggest you to first give a small definition of Git bisect, Git bisect is used to find the commit that introduced a bug by using binary search. Command for Git bisect is
git bisect <subcommand> <options>
Now since you have mentioned the command above, explain what this command will do, This command uses a binary search algorithm to find which commit in your project’s history introduced a bug. You use it by first telling it a “bad” commit that is known to contain the bug, and a “good” commit that is known to be before the bug was introduced. Then Git bisect picks a commit between those two endpoints and asks you whether the selected commit is “good” or “bad”. It continues narrowing down the range until it finds the exact commit that introduced the change.
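A minimal sketch of a bisect session (the commit hash is a placeholder):
git bisect start
git bisect bad                 # the current HEAD is known to contain the bug
git bisect good a1b2c3d        # the last commit known to be good
# git checks out a commit roughly halfway between; test it, then mark it:
git bisect good                # or: git bisect bad
# repeat until git reports the first bad commit, then clean up:
git bisect reset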
According to me, you should start by saying git rebase is a command that takes the commits of your current branch and replays them on top of another branch, moving all of the local commits that are ahead of the rebased branch to the top of the history on that branch.
Now once you have defined Git rebase time for an example to show how it can be used to resolve conflicts in a feature branch before merge, if a feature branch was created from master, and since then the master branch has received new commits, Git rebase can be used to move the feature branch to the tip of master.
The command effectively will replay the changes made in the feature branch at the tip of master, allowing conflicts to be resolved in the process. When done with care, this will allow the feature branch to be merged into master with relative ease and sometimes as a simple fast-forward operation.
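For example, a hedged sketch assuming a branch named feature that was created from master:
git checkout feature
git rebase master
# if a conflict occurs, fix the affected files, then:
git add <fixed-files>
git rebase --continue
# afterwards the feature branch can usually be merged as a fast-forward:
git checkout master
git merge feature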
I will suggest you to first give a small introduction to sanity checking, A sanity or smoke test determines whether it is possible and reasonable to continue testing.
Now explain how to achieve this, this can be done with a simple script related to the pre-commit hook of the repository. The pre-commit hook is triggered right before a commit is made, even before you are required to enter a commit message. In this script one can run other tools, such as linters and perform sanity checks on the changes being committed into the repository.
Finally give an example, you can refer the below script:
#!/bin/sh
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
if [ -z "$files" ]; then
exit 0
fi
unfmtd=$(gofmt -l $files)
if [ -z "$unfmtd" ]; then
exit 0
fi
echo "Some .go files are not fmt'd"
exit 1
This script checks to see if any .go file that is about to be committed needs to be passed through the standard Go source code formatting tool gofmt. By exiting with a non-zero status, the script effectively prevents the commit from being applied to the repository.
For this answer, instead of just telling the command, explain what exactly this command will do. You can say that, to get a list of files that have changed in a particular commit, use the command
git diff-tree -r {hash}
Given the commit hash, this will list all the files that were changed or added in that commit. The -r flag makes the command list individual files, rather than collapsing them into root directory names only.
You can also include the below mention point although it is totally optional but will help in impressing the interviewer.
The output will also include some extra information, which can be easily suppressed by including two flags:
git diff-tree --no-commit-id --name-only -r {hash}
Here --no-commit-id will suppress the commit hashes from appearing in the output, and --name-only will only print the file names, instead of their paths.
There are three ways to configure a script to run every time a repository receives new commits through push, one needs to define either a pre-receive, update, or a post-receive hook depending on when exactly the script needs to be triggered.
Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the “.git” directory, or they can be created elsewhere and links to those scripts can be placed within the directory.
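As an illustration (the paths are made-up examples), a minimal post-receive hook placed at hooks/post-receive in the server-side repository and made executable:
#!/bin/sh
# runs after every push the repository accepts
echo "New commits received, updating the deployed checkout..."
git --work-tree=/var/www/myapp --git-dir=/srv/repos/myapp.git checkout -f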
I will suggest you to include both the below mentioned commands:
git branch --merged lists the branches that have been merged into the current branch.
git branch --no-merged lists the branches that have not been merged.
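For example, the first command is often combined with a delete to clean up branches that are already merged (a hedged one-liner; review the list before running it):
git branch --merged | grep -v '\*' | xargs -n 1 git branch -d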
Centralized Version Control | Distributed Version Control |
---|---|
Developers do not keep a full copy of the main repository locally; they only check out a working copy. | Every developer has a full copy (clone) of the main repository, including its history, on their local machine. |
A connection to the central server is needed to access the repository's history, because there is no local copy of it. | No connection to the central server is needed for most operations, because the full repository history is available locally. |
If the main server crashes, developers cannot access the repository until it is restored. | If the main server crashes, any developer's local clone can be used to restore it, so availability is not a problem. |
Both are merging mechanisms, but the difference between Git Merge and Git Rebase is that with Git Merge the log shows the complete history of commits, including a merge commit.
With Git Rebase, however, the commits are rewritten and rearranged so that the log looks linear and simple to understand. This is also a drawback, since other team members may not be able to see how the different commits were combined with one another.
Git Pull | Git Fetch |
---|---|
Git pull updates the working directory with the latest changes from the remote server. | Git fetch only downloads new data from the remote repository into the local repository. |
Git pull downloads the data and merges it into the current working branch. | Git fetch downloads the data but does not merge it into the working branch. |
Command – git pull origin | Command – git fetch origin |
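In other words, a git pull is roughly equivalent to a git fetch followed by a merge of the corresponding remote-tracking branch, as this small sketch shows (the branch name master is just an example):
git pull origin master
# is roughly the same as:
git fetch origin
git merge origin/master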
Shift left is a concept used in DevOps to achieve a better level of security, performance, etc. earlier in the lifecycle. Let us get into detail with an example: if we look at all the phases in DevOps, security is normally tested just before the deployment step. By using the shift-left method we can include security in the development phase, which is on the left (as shown in the diagram), and not only in development; we can integrate it with all phases, such as before development and in the testing phase too. This increases the level of security by finding errors at the very earliest stages.
Now, let’s look at Continuous Integration interview questions:
I will advise you to begin this answer by giving a small definition of Continuous Integration (CI). It is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.
I suggest that you explain how you have implemented it in your previous job. You can refer the below given example:
In the diagram shown above:
For this answer, you should focus on the need of Continuous Integration. My suggestion would be to mention the below explanation in your answer:
Continuous Integration of Dev and Testing improves the quality of software, and reduces the time taken to deliver it, by replacing the traditional practice of testing after completing all development. It allows Dev team to easily detect and locate problems early because developers need to integrate code into a shared repository several times a day (more frequently). Each check-in is then automatically tested.
Here you have to mention the requirements for Continuous Integration. You could include the following points in your answer:
I will approach this task by copying the jobs directory from the old server to the new one. There are multiple ways to do that; I have mentioned them below:
You can: simply copy the corresponding job directory from the old server's JENKINS_HOME/jobs to the new one; clone a job by copying its directory under a different name; or rename a job by renaming its directory. After copying, reload the configuration from the Jenkins UI. A minimal sketch is shown below.
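A hedged sketch, assuming the default JENKINS_HOME of /var/lib/jenkins on both servers (paths and the host name jenkins-new are placeholders):
rsync -avz /var/lib/jenkins/jobs/ jenkins-new:/var/lib/jenkins/jobs/
# then, on the new server, use Manage Jenkins > Reload Configuration from Disk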
Answer to this question is really direct. To create a backup, all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory. You can also copy a job directory to clone or replicate a job or rename the directory.
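For example, a simple cron-friendly backup could look like this (the JENKINS_HOME path and backup location are assumptions):
# archive the whole Jenkins home directory with a dated filename
tar -czf /backups/jenkins-home-$(date +%F).tar.gz -C /var/lib/jenkins .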
My approach to this answer will be to first mention how to create Jenkins job. Go to Jenkins top page, select “New Job”, then choose “Build a free-style software project”.
Then you can tell the elements of this freestyle job: an optional SCM (such as a Git repository) where the source code lives, optional build triggers that control when Jenkins performs a build, a build step (Ant, Maven or shell script) that does the actual work, and optional post-build actions such as archiving artifacts or sending email notifications.
Below, I have mentioned some important Plugins:
These Plugins, I feel are the most useful plugins. If you want to include any other Plugin that is not mentioned above, you can add them as well. But, make sure you first mention the above stated plugins and then add your own.
The way I secure Jenkins is mentioned below. If you have any other way of doing it, please mention it in the comments section below:
Jenkins is one of the many popular tools that are used extensively in DevOps. Edureka’s DevOps Certification course will provide you hands-on training with Jenkins and high quality guidance from industry experts. Give it a look:
This is a continuous deployment strategy that is generally used to decrease downtime. This is used for transferring the traffic from one instance to another.
For example, let us take a situation where we want to release a new version of the code and replace the old version with it. The old version is considered to be running in the blue environment, and the new version, which contains our latest changes, is considered the green environment.
To run the new version, we need to transfer the traffic from the old instance to the new instance, that is, from the blue environment to the green environment. The new version runs on the green instance, and traffic is gradually shifted to it. The blue instance is kept idle and used for rollback.
In Blue-Green deployment, the application is not deployed in the same environment. Instead, a new server or environment is created where the new version of the application is deployed.
Once the new version of the application is deployed in a separate environment, the traffic to the old version of the application is redirected to the new version of the application.
We follow the Blue-Green Deployment model, so that any problem which is encountered in the production environment for the new application if detected. The traffic can be immediately redirected to the previous Blue environment, with minimum or no impact on the business. Following diagram shows, Blue-Green Deployment.
A pattern can be defined as an ideology on how to solve a problem. Now anti-pattern can be defined as a method that would help us to solve the problem now but it may result in damaging our system [i.e, it shows how not to approach a problem ].
Some of the anti-patterns we see in DevOps are:
First, if we want to approach a project that needs DevOps, we need to know a few concepts like :
Now let’s move on to the Continuous Testing questions.
I will advise you to follow the below mentioned explanation:
Continuous Testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build. In this way, each build is tested continuously, allowing Development teams to get fast feedback so that they can prevent those problems from progressing to the next stage of the software delivery life-cycle. This dramatically speeds up a developer's workflow as there's no need to manually rebuild the project and re-run all tests after making changes.
Automation testing, or Test Automation, is a process of automating the manual process of testing the application/system under test. Automation testing involves the use of separate testing tools which let you create test scripts that can be executed repeatedly and don't require any manual intervention.
I have listed down some advantages of automation testing. Include these in your answer and you can add your own experience of how Continuous Testing helped your previous company:
I have mentioned a generic flow below which you can refer to:
In DevOps, developers are required to commit all the changes made in the source code to a shared repository. Continuous Integration tools like Jenkins will pull the code from this shared repository every time a change is made in the code and deploy it for Continuous Testing that is done by tools like Selenium as shown in the below diagram.
In this way, any change in the code is continuously tested unlike the traditional approach.
You can answer this question by saying, “Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by having “big-bang” testing left to the end of the cycle such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent and good quality releases.”
Key elements of Continuous Testing are:
Here mention the testing tool that you have worked with and accordingly frame your answer. I have mentioned an example below:
I have worked on Selenium to ensure high quality and more frequent releases.
Some advantages of Selenium are:
Selenium supports two types of testing:
Regression Testing: It is the act of retesting a product around an area where a bug was fixed.
Functional Testing: It refers to the testing of software features (functional points) individually.
My suggestion is to start this answer by defining Selenium IDE. It is an integrated development environment for Selenium scripts. It is implemented as a Firefox extension, and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment that they will run in.
Now include some advantages in your answer. With autocomplete support and the ability to move commands around quickly, Selenium IDE is the ideal environment for creating Selenium tests no matter what style of tests you prefer.
I have mentioned differences between Assert and Verify commands below:
The following syntax can be used to launch Browser:
WebDriver driver = new FirefoxDriver();
WebDriver driver = new ChromeDriver();
WebDriver driver = new InternetExplorerDriver();
For this answer, my suggestion would be to give a small definition of Selenium Grid. It can be used to execute same or different test scripts on multiple platforms and browsers concurrently to achieve distributed test execution. This allows testing under different environments and saving execution time remarkably.
Learn Automation testing and other DevOps concepts in live instructor-led online classes in our DevOps Certification course.
Continuous Testing | Automation Testing |
---|---|
Continuous Testing is a process that involves executing all the automated test cases as a part of the software delivery pipeline. | Automation testing is a process that uses tools to test code repeatedly without manual intervention. |
This process mainly focuses on business risks. | This process mainly focuses on achieving a bug-free build. |
It is comparatively slower than automation testing. | It is comparatively faster than continuous testing. |
Now let’s check how much you know about Configuration Management.
The purpose of Configuration Management (CM) is to ensure the integrity of a product or system throughout its life-cycle by making the development or deployment process controllable and repeatable, therefore creating a higher quality product or system. The CM process allows orderly management of system information and system changes for purposes such as to:
Given below are few differences between Asset Management and Configuration Management:
According to me, you should first explain Asset. It has a financial value along with a depreciation rate attached to it. IT assets are just a sub-set of it. Anything and everything that has a cost and the organization uses it for its asset value calculation and related benefits in tax calculation falls under Asset Management, and such item is called an asset.
Configuration Item on the other hand may or may not have financial values assigned to it. It will not have any depreciation linked to it. Thus, its life would not be dependent on its financial value but will depend on the time till that item becomes obsolete for the organization.
Now you can give an example that can showcase the similarity and differences between both:
1) Similarity:
Server – It is both an asset as well as a CI.
2) Difference:
Building – It is an asset but not a CI.
Document – It is a CI but not an asset
Infrastructure as Code (IAC) is a type of IT infrastructure that operations teams can use to automatically manage and provision through code, rather than using a manual process.
For faster deployments, companies treat infrastructure like software: as code that can be managed with DevOps tools and processes. These tools let you make infrastructure changes more easily, rapidly, safely and reliably.
This depends on the organization's needs, so mention a few points about each of these tools:
Puppet is the oldest and most mature CM tool. Puppet is a Ruby-based Configuration Management tool, but while it has some free features, much of what makes Puppet great is only available in the paid version. Organizations that don’t need a lot of extras will find Puppet useful, but those needing more customization will probably need to upgrade to the paid version.
Chef is written in Ruby, so it can be customized by those who know the language. It also includes free features, plus it can be upgraded from open source to enterprise-level if necessary. On top of that, it’s a very flexible product.
Ansible is a very secure option since it uses Secure Shell. It’s a simple tool to use, but it does offer a number of other services in addition to configuration management. It’s very easy to learn, so it’s perfect for those who don’t have a dedicated IT staff but still need a configuration management tool.
SaltStack is a Python-based open source CM tool made for larger businesses, but its learning curve is fairly low.
I will advise you to first give a small definition of Puppet. It is a Configuration Management tool which is used to automate administration tasks.
Now you should describe its architecture and how Puppet manages its Agents. Puppet has a Master-Slave architecture in which the Slave has to first send a Certificate signing request to Master and Master has to sign that Certificate in order to establish a secure connection between Puppet Master and Puppet Slave as shown on the diagram below. Puppet Slave sends request to Puppet Master and Puppet Master then pushes configuration on Slave.
Refer the diagram below that explains the above description.
The easiest way is to enable auto-signing in puppet.conf.
Do mention that this is a security risk. If you still want to do this:
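A hedged sketch, assuming a reasonably recent Puppet master where the service is called puppetserver (the setting section and service name can vary between Puppet versions):
# on the Puppet master: sign every incoming certificate request automatically
puppet config set autosign true --section master
# restart the master so the setting takes effect
systemctl restart puppetserver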
For this answer, I will suggest you to explain your past experience with Puppet. You can refer the below example:
I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles pattern and documented the purpose of each module in README to ensure that others could update the module using Git. The modules I wrote are still being used, but they’ve been improved by my teammates and members of the community
Over here, you need to mention the tools and how you have used those tools to make Puppet more powerful. Below is one example for your reference:
Changes and requests are ticketed through Jira and we manage requests through an internal process. Then, we use Git and Puppet’s Code Manager app to manage Puppet code in accordance with best practices. Additionally, we run all of our Puppet changes through our continuous integration pipeline in Jenkins using the beaker testing framework.
It is a very important question so make sure you go in a correct flow. According to me, you should first define Manifests. Every node (or Puppet Agent) has got its configuration details in Puppet Master, written in the native Puppet language. These details are written in the language which Puppet can understand and are termed as Manifests. They are composed of Puppet code and their filenames use the .pp extension.
Now give an example. You can write a manifest in Puppet Master that creates a file and installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master.
For this answer, you can go with the below mentioned explanation:
A Puppet Module is a collection of Manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple Manifests. It is considered best practice to use Modules to organize almost all of your Puppet Manifests.
Puppet programs are called Manifests which are composed of Puppet code and their file names use the .pp extension.
You are expected to answer what exactly Facter does in Puppet so according to me, you should say, “Facter gathers basic information (facts) about Puppet Agent such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in Puppet Master’s Manifests as variables.”
Begin this answer by defining Chef. It is a powerful automation platform that transforms infrastructure into code. Chef is a tool for which you write scripts that are used to automate processes. What processes? Pretty much anything related to IT.
Now you can explain the architecture of Chef, it consists of:
My suggestion is to first define Resource. A Resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.
You should explain about the functions of Resource for that include the following points:
For this answer, I will suggest you to use the above mentioned flow: first define Recipe. A Recipe is a collection of Resources that describes a particular configuration or policy. A Recipe describes everything that is required to configure part of a system.
After the definition, explain the functions of Recipes by including the following points:
The answer to this is pretty direct. You can simply say, “a Recipe is a collection of Resources, and primarily configures a software package or some piece of infrastructure. A Cookbook groups together Recipes and other information in a way that is more manageable than having just Recipes alone.”
My suggestion is to first give a direct answer: when you don’t specify a resource’s action, Chef applies the default action.
Now explain this with an example, the below resource:
file 'C:\Users\Administrator\chef-repo\settings.ini' do
content 'greeting=hello world'
end
is the same as the below resource:
file 'C:\Users\Administrator\chef-repo\settings.ini' do
action :create
content 'greeting=hello world'
end
because :create is the file Resource's default action.
Modules are considered to be the units of work in Ansible. Each module is mostly standalone and can be written in a standard scripting language such as Python, Perl, Ruby, bash, etc. One of the guiding properties of modules is idempotency, which means that even if an operation is repeated multiple times, e.g. upon recovery from an outage, it will always place the system into the same state.
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Playbooks are designed to be human-readable and are developed in a basic text language.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines.
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as an ad-hoc action:
ansible hostname -m setup
This will print out a dictionary of all of the facts that are available for that particular host.
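As a further hedged example, the facts can be narrowed down with the setup module's filter parameter (the host name and pattern are placeholders):
ansible webserver1 -m setup -a "filter=ansible_distribution*"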
WebLogic Server 8.1 allows you to select the load order for applications. See the Application MBean Load Order attribute in Application. WebLogic Server deploys server-level resources (first JDBC and then JMS) before deploying applications. Applications are deployed in this order: connectors, then EJBs, then Web Applications. If the application is an EAR, the individual components are loaded in the order in which they are declared in the application.xml deployment descriptor.
Yes, you can use weblogic.Deployer to specify a component and target a server, using the following syntax:
java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp
The auto-deployment feature checks the applications folder every three seconds to determine whether there are any new applications or any changes to existing applications and then dynamically deploys these changes.
The auto-deployment feature is enabled for servers that run in development mode. To disable auto-deployment feature, use one of the following methods to place servers in production mode:
Set -external_stage using weblogic.Deployer if you want to stage the application yourself, and prefer to copy it to its target by your own means.
Ansible and Puppet are two of the most popular configuration management tools among DevOps engineers. Learn them and more in our DevOps Masters Program designed by industry experts to certify you as a DevOps Engineer!
Generally, SSH is used for connecting to computers and working on them remotely. SSH is mostly used by the operations team, since they deal with administrative tasks that require remote access to systems. Developers also use SSH, but comparatively less than the operations team, as most of the time they work on local systems. As we know, in DevOps the development team and the operations team collaborate and work together; SSH is used when the operations team faces a problem and needs assistance from the development team on a remote system.
Memcached is a Free & open-source, high-performance, distributed memory object caching system.
This is generally used in the management of memory in dynamic web applications by caching the data in RAM. This helps to reduce the frequency of fetching from external sources. This also helps in speeding up the dynamic web applications by alleviating database load.
Conclusion: DevOps is a culture of collaboration between the Development team and operation team to work together to bring out an efficient and fast software product. So these are a few top DevOps interview questions that are covered in this blog. This blog will be helpful to prepare for a DevOps interview.
By the name, we can say it is a type of meeting which is conducted at the end of the project. In this meeting, all the teams come together and discuss the failures in the current project. Finally, they will conclude how to avoid them and what measures need to be taken in the future to avoid these failures.
In DevOps, CAMS stands for Culture, Automation, Measurement, and Sharing.
Let’s test your knowledge on Continuous Monitoring.
I will suggest you to go with the below mentioned flow:
Continuous Monitoring allows timely identification of problems or weaknesses and quick corrective action that helps reduce expenses of an organization. Continuous monitoring provides a solution that addresses three operational disciplines known as:
You can answer this question by first mentioning that Nagios is one of the monitoring tools. It is used for continuous monitoring of systems, applications, services, business processes, etc. in a DevOps culture. In the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes, end-users, or customers. With Nagios, you don't have to explain why an unseen infrastructure outage affects your organization's bottom line.
Now once you have defined what is Nagios, you can mention the various things that you can achieve using Nagios.
By using Nagios you can:
This completes the answer to this question. Further details like advantages etc. can be added as per the direction where the discussion is headed.
I will advise you to follow the below explanation for this answer:
Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins residing on the same server; they contact hosts or servers on your network or on the internet. One can view the status information using the web interface. You can also receive email or SMS notifications if something happens.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.
Now expect a few questions on Nagios components like Plugins, NRPE etc..
Begin this answer by defining Plugins. They are scripts (Perl scripts, Shell scripts, etc.) that can run from a command line to check the status of a host or service. Nagios uses the results from Plugins to determine the current status of hosts and services on your network.
Once you have defined Plugins, explain why we need Plugins. Nagios will execute a Plugin whenever there is a need to check the status of a host or service. Plugin will perform the check and then simply returns the result to Nagios. Nagios will process the results that it receives from the Plugin and take the necessary actions.
For this answer, give a brief definition of NRPE. The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines. Since these local resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines.
I will advise you to explain the NRPE architecture on the basis of the diagram shown below. The NRPE addon consists of two pieces: the check_nrpe plugin, which resides on the local monitoring machine, and the NRPE daemon, which runs on the remote Linux/Unix machine.
There is a SSL (Secure Socket Layer) connection between monitoring host and remote host as shown in the diagram below.
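A hedged example of the monitoring host querying the agent, assuming the default plugin path and a check_load command defined in the remote host's nrpe.cfg (the IP address is a placeholder):
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.50 -c check_load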
According to me, the answer should start by explaining Passive checks. They are initiated and performed by external applications/processes and the Passive check results are submitted to Nagios for processing.
Then explain the need for passive checks. They are useful for monitoring services that are Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis. They can also be used for monitoring services that are Located behind a firewall and cannot be checked actively from the monitoring host.
Make sure that you stick to the question during your explanation, so I will advise you to follow the below mentioned flow. Nagios checks for external commands under the following conditions:
For this answer, first point out the basic difference Active and Passive checks. The major difference between Active and Passive checks is that Active checks are initiated and performed by Nagios, while passive checks are performed by external applications.
If your interviewer is looking unconvinced with the above explanation then you can also mention some key features of both Active and Passive checks:
Passive checks are useful for monitoring services that are:
The main features of Active checks are as follows:
The interviewer will be expecting an answer related to the distributed architecture of Nagios. So, I suggest that you answer it in the below mentioned format:
With Nagios you can monitor your whole enterprise by using a distributed monitoring scheme in which local slave instances of Nagios perform monitoring tasks and report the results back to a single master. You manage all configuration, notification, and reporting from the master, while the slaves do all the work. This design takes advantage of Nagios’s ability to utilize passive checks i.e. external applications or processes that send results back to Nagios. In a distributed configuration, these external applications are other instances of Nagios.
First mention what this main configuration file contains and its function. The main configuration file contains a number of directives that affect how the Nagios daemon operates. This config file is read by both the Nagios daemon and the CGIs (It specifies the location of your main configuration file).
Now you can tell where it is present and how it is created. A sample main configuration file is created in the base directory of the Nagios distribution when you run the configure script. The default name of the main configuration file is nagios.cfg. It is usually placed in the etc/ subdirectory of your Nagios installation (i.e. /usr/local/nagios/etc/).
I will advise you to explain Flapping first. Flapping occurs when a service or host changes state too frequently; this generates a lot of problem and recovery notifications.
Once you have defined Flapping, explain how Nagios detects Flapping. Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. Nagios follows the below given procedure to do that:
A host or service is determined to have started flapping when its percent state change first exceeds a high flapping threshold. A host or service is determined to have stopped flapping when its percent state goes below a low flapping threshold.
According to me the proper format for this answer should be:
First name the variables and then a small explanation of each of these variables:
Then give a brief explanation for each of these variables. Name is a placeholder that is used by other objects. Use defines the "parent" object whose properties should be used. Register can have a value of 0 (indicating it's only a template) or 1 (an actual object). The register value is never inherited.
Answer to this question is pretty direct. I will answer this by saying, “One of the features of Nagios is object configuration format in that you can create object definitions that inherit properties from other object definitions and hence the name. This simplifies and clarifies relationships between various components.”
I will advise you to first give a small introduction on State Stalking. It is used for logging purposes. When Stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results.
Depending on the discussion between you and interviewer you can also add, “It can be very helpful in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked.”
Want to get trained in monitoring tools like Nagios? Want to get certified as a DevOps Engineer? Make sure you check out our DevOps Engineer Masters Program.
Let’s see how much you know about containers and VMs.
My suggestion is to explain the need for containerization first: containers are used to provide a consistent computing environment from a developer's laptop to a test environment, and from a staging environment into production.
Now give a definition of containers, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. Containerizing the application platform and its dependencies removes the differences in OS distributions and underlying infrastructure.
Below are the advantages of containerization over virtualization:
Given below are some differences. Make sure you include these differences in your answer:
I suggest that you go with the below mentioned flow:
A Docker image is the source of a Docker container. In other words, Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network.
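You can illustrate that lifecycle with a couple of commands; the image and container names here are made up:

docker build -t myapp:1.0 .              # build an image from the Dockerfile in the current directory
docker run -d --name myapp myapp:1.0     # start a container from that image
docker pull ubuntu:22.04                 # or pull a ready-made image from a registry such as Docker Hub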
Tip: Be familiar with Docker Hub in order to answer questions on pre-built images.
This is a very important question so just make sure you don’t deviate from the topic. I advise you to follow the below mentioned format:
Docker containers include the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.
Now explain how to create a Docker container: a container can be created either by building your own Docker image and then running it, or by running an image that is already available on Docker Hub.
Docker containers are basically runtime instances of Docker images.
The answer to this question is pretty direct. Docker Hub is a cloud-based registry service that allows you to link to code repositories, build and test your images, store manually pushed images, and link to Docker Cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.
According to me, below points should be there in your answer:
Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies, it makes it easy for developers to quickly create ready-to-run containerized applications, and it makes managing and deploying applications much easier. You can even share containers with your applications.
If you have some more points to add, you can do that, but make sure the above explanation is part of your answer.
You should start this answer by explaining Docker Swarm. It is native clustering for Docker which turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.
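Depending on the Docker version the interviewer has in mind, you may also be asked about the newer swarm mode built into the Docker engine. A minimal sketch looks like this; the IP address is made up and the worker token is a placeholder printed by the init command:

docker swarm init --advertise-addr 192.168.1.10              # turn this engine into a swarm manager
docker swarm join --token <worker-token> 192.168.1.10:2377   # run on each worker node to join the swarm
docker service create --replicas 3 --name web nginx          # schedule three nginx tasks across the swarm
docker service ls                                            # list services running in the swarm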
I will also suggest you include some supported tools:
This answer according to me should begin by explaining the use of Dockerfile. Docker can build images automatically by reading the instructions from a Dockerfile.
Now I suggest you give a small definition of a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.
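A minimal illustrative Dockerfile for a hypothetical Python application might look like this; the file names and image tag are assumptions, and it would be built with docker build -t myapp:1.0 . :

# Dockerfile
FROM python:3.11-slim                 # base image layer
WORKDIR /app                          # working directory inside the image
COPY requirements.txt .               # copy the dependency list first to benefit from layer caching
RUN pip install -r requirements.txt   # install dependencies into the image
COPY . .                              # copy the rest of the application source
CMD ["python", "app.py"]              # default command run when a container starts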
Now expect a few questions to test your experience with Docker.
You can use JSON instead of YAML for your Compose file. To use a JSON file with Compose, specify the filename, for example:
docker-compose -f docker-compose.json up
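Since YAML is a superset of JSON, a minimal docker-compose.json could look like the sketch below; the service name, image and port mapping are only assumptions for illustration:

{
  "version": "3",
  "services": {
    "web": {
      "image": "nginx:alpine",
      "ports": ["8080:80"]
    }
  }
}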
Explain how you have used Docker to help rapid deployment. Explain how you have scripted Docker and used Docker with other tools like Puppet, Chef or Jenkins. If you have no past practical experience in Docker and have past experience with other tools in similar space, be honest and explain the same. In this case, it makes sense if you can compare other tools to Docker in terms of functionality.
I suggest you give a direct answer to this. We can use a Docker image to create a Docker container by using the below command:
docker run -t -i <image name> <command name>
This command will create and start a container.
You should also add: if you want to check the list of all containers on a host along with their status, use the below command:
docker ps -a
In order to stop the Docker container you can use the below command:
docker stop <container ID>
Now to restart the Docker container you can use:
docker restart <container ID>
Large web deployments like Google and Twitter, and platform providers such as Heroku and dotCloud all run on container technology, at a scale of hundreds of thousands or even millions of containers running in parallel.
I will start this answer by saying that Docker runs natively only on Linux and cloud platforms, and then I will mention the Linux vendors below:
Cloud:
Note that Docker does not run natively on Windows or macOS; on those platforms it runs inside a lightweight Linux virtual machine (for example, via Docker Desktop or Docker Toolbox).
You can answer this by saying, no I won’t lose my data when the Docker container exits. Any data that your application writes to disk gets preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.
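You can strengthen this answer by mentioning that data which must survive even after the container is deleted should live in a volume. A hedged example with made-up names:

docker volume create app-data                                       # create a named volume managed by Docker
docker run -d --name web -v app-data:/usr/share/nginx/html nginx:alpine   # mount the volume into the container
docker rm -f web                                                    # even after the container is removed...
docker volume ls                                                    # ...the app-data volume and its contents remain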
A DevOps pipeline can be defined as a set of tools and processes in which both the development team and the operations team work together. In DevOps automation, CI/CD plays an important role. If we look at the flow: first, continuous integration completes; that triggers the next step, continuous delivery; and after continuous delivery, continuous deployment is triggered. The connection of all these functions can be defined as a pipeline.
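If the interviewer wants to see what such a pipeline looks like in practice, a minimal sketch of a declarative Jenkins pipeline is one option; the shell scripts here are placeholders for your real build, test, and deploy steps:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }       // continuous integration: compile and package the application
        }
        stage('Test') {
            steps { sh './run-tests.sh' }   // automated tests gate the release
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }      // continuous delivery/deployment to the target environment
        }
    }
}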
| Continuous Deployment | Continuous Delivery |
| --- | --- |
| Deployment to production is automated | Deployment to production is manual |
| App deployment to production can be done as soon as the code passes all the tests; there is no release day in continuous deployment | App deployment to production is done manually on a certain day, called the “release day” |
And, that’s it!
I hope these questions help you crack your DevOps interview. If you’re searching for a demanding and rewarding career, whether you’ve worked in DevOps or are new to the field, the PG program in DevOps is what you need to learn how to succeed. From the basics to the most advanced techniques, we cover everything.
All the best for your interview!
Original article source at https://www.edureka.co
#devops #interviewquestion
1649401164
SQL is a necessity for data scientists. This video looks at 3 categories of SQL questions. Knowing the categories of questions will ensure that you prepare fully and don’t overlook any skills while practicing.
- Computing Ratios
- Data Categorization
- Cumulative Sums
Timestamp
00:00 SQL Interview
01:02 SQL Fundamentals
02:15 Computing Ratios
06:07 Data Categorization
09:12 Cumulative Sums
11:39 Next Video - Window Functions
#sql #datascience #interviewquestion
1648872225
C# is a popular Microsoft-developed, general-purpose, object-oriented programming language. In this article, learn the top C# interview questions and answers to crack your next interview.
During the development of the .NET Framework, Anders Hejlsberg and his colleagues created C#. C# targets the CLI (Common Language Infrastructure), which consists of a runtime environment and executable code that enable a variety of high-level languages to be used on a variety of computer platforms and architectures.
C# programming features:
Now, let us take a look at the top 20 C# interview questions that you might face!
Continue statement - Used to skip the remaining statements of the current iteration and jump to the next iteration of the loop.
Break statement - Used to skip the remaining statements of the current iteration and exit the loop entirely.
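A tiny illustrative C# loop showing both statements side by side:

for (int i = 0; i < 5; i++)
{
    if (i == 2) continue;     // skip the rest of this iteration and move on to i = 3
    if (i == 4) break;        // leave the loop entirely
    Console.WriteLine(i);     // prints 0, 1 and 3
}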
The different types of comments in C# are:
Example -
/// example of XML comment
Example -
// example of single-line comment
Example -
/* example of an
multiline comment */
Four steps of code compilation in C# include -
The various methods of passing parameters in a method include -
The C# access modifiers are -
The following are the advantages of C# -
The following IDEs’ are useful in C# development -
Below are the reasons why we use the C# language -
Some of the main features of C# are -
void - A method declared with the void return type does not return a value.
Static - The static keyword declares a member that belongs to the type itself rather than to any object instance, so it can be accessed without creating an object.
Public - A member declared with the public keyword can be accessed from any other class without restriction.
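These three keywords come together in the classic entry-point signature, which is worth being able to write from memory:

class Program
{
    // public: callable from outside the class; static: no object needed; void: returns nothing
    public static void Main(string[] args)
    {
        Console.WriteLine("Hello, interview!");
    }
}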
A class is an object's blueprint. It specifies the kinds of data and the functions that its objects will have. A class allows you to combine variables of different types, along with events and methods, to construct your own custom types. A class in C# is defined via the class keyword.
In simple terms, managed code is code that is executed by the CLR (Common Language Runtime). This means the application code depends on the .NET platform, which oversees its execution and handles memory, security, and other execution-related activities. Code executed by a runtime environment that is not part of the .NET platform is considered unmanaged code; in that case, those responsibilities fall to the application's own runtime.
It is a type of class whose objects cannot be instantiated directly, and it is signified by the keyword 'abstract'. It can contain one or more methods, including abstract methods that derived classes must implement.
Once the try and catch blocks have been completed, the finally block is executed, since it is part of exception handling. This block of code runs whether or not an exception was caught. In general, the code in this block performs cleanup.
Just before garbage collection, the finalize method is called. Its main purpose is to clean up unmanaged resources, and it is triggered automatically when an instance is no longer referenced.
An object is an instance of a class that is used to access the class's methods. The "new" keyword is used to create an object, and the object created in memory holds information about that class's methods, variables, and behavior.
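A short illustrative example tying the class and object answers together; the class, field and names used here are made up:

class Employee
{
    public string Name = "";                          // field (data)
    public void Greet()                               // method (behavior)
    {
        Console.WriteLine($"Hello, {Name}");
    }
}

class Demo
{
    static void Main()
    {
        Employee e = new Employee();                  // the new keyword creates the object in memory
        e.Name = "Asha";
        e.Greet();                                    // access the class's behavior through the object
    }
}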
To prevent a class from being inherited, we construct sealed classes. The sealed modifier is used to do this. A compilation error occurs if we try to use a sealed class as a base class.
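A two-line illustration, with made-up class names:

sealed class Logger { }
// class FileLogger : Logger { }   // compile-time error: cannot derive from the sealed type 'Logger'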
Creating multiple methods with the same name but unique signatures within the same class is known as method overloading. Overload resolution is used by the compiler to identify which method will be invoked when you compile.
An interface is a type that does not contain any implementation. Only the declarations of methods, properties, events, and indexers are included.
A partial class effectively splits a class's definition across multiple classes in the same or different source code files. A class definition can be written in numerous files, but it is compiled as a single class, and when an object of that class is created, all methods from all source files can be accessed through the same object. The keyword 'partial' denotes this.
Method overriding changes the implementation of a method in a derived class, which changes the method's behavior. Method overloading is defining methods with the same name but distinct signatures within the same class.
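A compact illustrative example of both ideas; the class and method names are made up:

class Shape
{
    public virtual double Area() { return 0; }                      // virtual: derived classes may override
    public double Area(double scale) { return Area() * scale; }     // overload: same name, different signature
}

class Circle : Shape
{
    public double Radius = 1;
    public override double Area() { return Math.PI * Radius * Radius; }   // override: new behavior in the derived class
}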
Original article source at https://www.simplilearn.com
#csharp #programming #interviewquestion
1646298510
Tune into this video, to learn about the best scalable architectural practices which you should be following as a full-stack developer when you're building your frontend and backend.
Drop a comment and let us know if you've watched this video till the end!
Timestamps
0:00 frontend and backend - best scalable architecture
9:06 Why are websockets unscalable (and how to scale them)
20:05 High scale Video processing pipeline architecture
36:15 Reverse proxy explained in 10 minutes
45:43 Deploying a MERN Stack App On Production - Best Practices
#programming #fullstack #interviewquestion
1641956201
Preparing for coding interviews? Competitive programming? Learn to solve 10 common coding problems and improve your problem-solving skills.
⌨️ (0:00:00) Introduction
⌨️ (0:00:37) Valid anagram
⌨️ (0:05:10) First and last index in sorted array
⌨️ (0:13:44) Kth largest element
⌨️ (0:19:50) Symmetric tree
⌨️ (0:26:42) Generate parentheses
⌨️ (0:37:03) Gas station
⌨️ (0:50:06) Course schedule
⌨️ (1:06:50) Kth permutation
⌨️ (1:20:13) Minimum window substring
⌨️ (1:47:46) Largest rectangle in histogram
⌨️ (2:10:30) Conclusion
💻 Code: https://gist.github.com/syphh/173172ec9a4a1376e5096a187ecbddc9
#programming #developer #interviewquestion
1641884588
In this video, we will take a close look at a difficult data science interview question from Uber and walk you through a solution in Python. It turns out that with just a few simple yet clever steps, this question becomes much easier to solve.
Timeline:
#python #interviewquestion
1635742006
Here are five red flags you need to look for from your prospective employer when you go to a face to face interview:
1. Disorganization
2. Trash Talking
3. Inappropriate or Offensive Behavior
4. Bait-and-Switch
5. "Many Hats"
If you see an employer demonstrating one of the above behaviors - run!
#programming #interviewquestion #developer #jobs
1634543701
DevOps is the intersection point of software development, operations, and quality assurance (QA). If you are planning to start a career in this field, you must prepare the top DevOps interview questions that you might face in your job interview. In this article, we have listed 70+ most frequently asked DevOps interview questions and answers to boost your interview preparation.
1634093580
ASP.NET is an open-source web application framework developed by Microsoft. It is a subset of the .NET framework and the successor of the classic Active Server Pages (ASP). It is used to create web services and applications. Here, we have made a list of the top 50 ASP.NET interview questions along with their answers. The questions range from basic to advanced levels, so these will help you crack the interview.
1628502540
In this video I’ll share the difference between string with small ‘s’ and String with ‘S’.
I have faced this question in interviews at many companies, such as Microsoft, Deloitte, and Tech M.
#csharp #interviewquestion #microsoftinterviewquestion