Best 5 Data Science Courses for Better Job Opportunities

Data science is a rapidly growing field, and it has become a popular choice for many professionals looking for a lucrative and challenging career. With the increasing demand for data scientists, there is no better time to invest in learning this skill than now. However, with so many options available, it can be challenging to choose the right course. To help you out, we have compiled a list of the best data science courses in 2023 that can help you secure better job opportunities.

Data Science Specialization by Johns Hopkins University

This course is offered on Coursera and covers the fundamentals of data science, including statistical inference, data exploration, and data visualization. It consists of nine courses that can be completed in about ten months, and it includes hands-on projects to help you develop practical skills. This course has a rating of 4.6 out of 5, with over 500,000 enrolled learners.

Applied Data Science with Python Specialization by University of Michigan

This course is also offered on Coursera and is designed to teach you how to apply Python programming skills to real-world data analysis problems. It consists of five courses that can be completed in about six months, and it includes several case studies and projects to help you build a portfolio. This course has a rating of 4.5 out of 5, with over 100,000 enrolled learners.


Dataquest

Dataquest is an online learning platform that offers interactive courses in data science, machine learning, and data engineering. Their courses are project-based, which means you will be working on real-world problems from day one. They offer a free trial and have flexible pricing options, making it an affordable choice for learners. Dataquest has a rating of 4.8 out of 5, with over 10,000 learners.

HarvardX Data Science Program

HarvardX Data Science Program is a comprehensive program that covers the entire data science pipeline, from data collection and cleaning to machine learning and communication. It consists of eight courses that can be completed in about nine months, and it includes several case studies and projects to help you develop practical skills. This course has a rating of 4.5 out of 5, with over 10,000 enrolled learners.

IBM Data Science Professional Certificate

This course is offered on Coursera and is designed to teach you the essential skills needed to become a successful data scientist. It consists of nine courses that cover topics such as data analysis, data visualization, and machine learning. It includes hands-on projects that help you develop practical skills and build a portfolio. This course has a rating of 4.6 out of 5, with over 150,000 enrolled learners.

Check out our table below to see the top data science courses for 2023, with ratings, reviews, and durations:

| Course Title | Provider | Rating | Reviews | Duration |
| --- | --- | --- | --- | --- |
| Data Science Specialization | Coursera | 4.6/5 | 30,000+ | 10 months |
| Applied Data Science with Python | Coursera | 4.6/5 | 4,000+ | 5 months |
| IBM Data Science Professional Certificate | Coursera | 4.5/5 | 6,000+ | 8 months |
| Introduction to Data Science | edX | 4.6/5 | 1,000+ | 4 weeks |
| Data Science Bootcamp | Springboard | 4.8/5 | 2,500+ | 6 months |
| Data Science and Machine Learning Bootcamp | Udemy | 4.5/5 | 5,000+ | 14 weeks |
| Machine Learning A-Z | Udemy | 4.5/5 | 200,000+ | 42 hours |

As you can see, there are a variety of courses available that cover different topics and have different durations. With so many options to choose from, it’s important to consider your goals and learning style when selecting a course. Be sure to read the course descriptions, reviews, and ratings carefully to find the best fit for you.

Wrap Up!

Taking a data science course can be an excellent investment in your future career, as it can help you build skills and knowledge that are highly valued by employers. Whether you’re just starting out in the field or looking to take your skills to the next level, there’s a course out there that can help you achieve your goals. So why wait? Start exploring your options today and take the first step towards a rewarding career in data science!

Oral Brekke


Popular Tips and Tricks for Your First Tech Job

Starting a new job is daunting for anyone. Here's how to navigate the early days at your first tech job.

First days at work are scary. I still recall many instances where I lay awake at night before my first day at work, having an internal meltdown over what would happen the next day. Starting a new job is uncharted territory for most people. Even if you're a veteran in the industry, there's no denying that there can be a part of you that's a bit terrified of what is to come.

Understandably, a lot is happening. There are new people to meet, new projects and technologies to understand, documentation to read, tutorials to sit through, and endless HR presentations and paperwork to fill out. This can be overwhelming and, coupled with the considerable degree of uncertainty and unknowns you're dealing with, can be quite anxiety-inducing.

Two reasons motivated me to write about this subject. The first is that back when I was a student, most of the discussion revolved around getting a job in tech, and no one talked about what happened next. How do you excel in your new role? Looking back, I think I assumed that the hard part was getting the job, and that whatever came after, I could probably figure out myself.

Similarly, once I started working in the industry, most of the career-related content I came across was about how to go from one senior level to another. No one really talked about what to do in the middle. What about the interns and the junior engineers? How do they navigate their early careers?

After completing three years of full-time professional experience as a software engineer (and a couple of internships before that), I reflected on my time and put together a list of tips and tricks I've used while settling into a new tech role. I wanted to look beyond just the first couple of months and focus on long-term success.

Reflect on existing processes and documentation

Most new employees start by either having a ton of documentation thrown their way or none at all. Instead of being overwhelmed by either of these possibilities, you could view this as an opportunity.

Identify gaps in existing documentation and think about how you could improve it for the next engineer that gets onboarded. This not only shows initiative on your part but also demonstrates that you're committed to improving existing processes within your team.

I've seen both ends of the spectrum. I've been on teams with no documentation whatsoever. I've also been on teams that were very diligent with keeping their documentation up to date. Your path is pretty straightforward with the former, and you can work on creating that missing documentation. With the latter, you can always think of ways to improve what already exists. Sometimes, too much documentation in written form can also feel intimidating, especially for new employees. Some things might be better explained through other mediums, like video tutorials or screencasts.

Ask questions

I encourage you to find out whether a buddy will be assigned to you when you start. This is a fairly common practice at companies; the buddy's purpose is to help you as you onboard. I've found this incredibly helpful because it gives you someone to direct all your questions to, so you don't have to run around trying to find the right person or team.

While asking questions should always be encouraged, it is also necessary to do your homework before you ask those questions, including:

  • Do your research. This encompasses doing a web search, checking forums, and reading existing documentation. Use all the available tools at your disposal. However, it is essential to timebox yourself. You must balance doing your due diligence and keeping project deadlines and deliverables in mind.
  • Talk it out. As someone whose first language isn't English, I recommend talking things out loud before asking questions. In my experience, I've often found that, especially when I'm struggling with something difficult, I think in one language (probably my native language) and must explain it in another. This can be a bit challenging sometimes because doing that translation might not be straightforward.
  • Organize your thoughts. When struggling with something, it's very common to have many scrambled ideas that make sense to us but might not necessarily make sense to another person. I suggest sitting down, gathering your thoughts, writing them down, and talking through them out loud. This practice ensures that when you're explaining your thought process, it flows as intended, and the listener can follow your train of thought.

This approach is called the rubber duck technique, a common practice developers use while debugging. The idea is that explaining your problem to a third party (or a rubber duck) can be very helpful in getting to the solution. Practicing it also builds your communication skills.

Respect people's time. Even if you're reaching out to someone like your buddy, be cognizant of the fact that they also have their day-to-day tasks to complete. Some things that I've tried out include the following:

  • Write down your questions, then set aside some time with your mentor to talk through them.
  • Compile questions instead of repeatedly asking for help, so your mentor can get to them when they have time.
  • Schedule a quick 15-20 minute video chat, especially if you want to share your screen, which is a great way to showcase your findings.

I think these approaches are better because you get someone's undivided attention instead of bothering them every couple of minutes when their attention might be elsewhere.

Deep dive into your projects

Even on teams with excellent documentation, starting your technical projects can be daunting because multiple components are involved. Over time, you will learn how your team does things, but it can save you time and potential headaches to figure this out early on by keeping a handy list to refer to: basic project setup, testing requirements, review and deployment processes, task tracking, and documentation.

If there's no documentation for the project you're starting on (a situation I have been in), see if you can identify the current or previous project owner and understand the basic project structure. This includes setting it up, deploying it, etc.

  • Identify your team's preferred IDE (integrated development environment). You're free to use the IDE of your choice, but using the same one as your team can help, especially when debugging, since different IDEs offer varying degrees of debugging support.
  • Learn how to debug, and I don't just mean using print statements (not that there's anything wrong with that approach). Leverage your team's experience here!
  • Understand testing requirements. This might depend on the scope of your project and general team practices, but the earlier you figure this out, the more confident you'll be in the changes you push to production.
  • Visualize the deployment process. This process can vary by team, company, etc. Regardless of how informal or formal it may be, make sure you understand how your changes get deployed to production, what the deployment pipeline looks like, how to deploy changes safely, what to do in case of failed builds, how to rollback faulty changes, and how to test your changes in production.
  • Understand the ticketing process, how to document tickets, and the level of detail expected. You'll see a lot of variation here: some companies expect engineers to update their tickets daily to show progress, while others don't require that level of documentation.
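To make the debugging point concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `apply_discount` function and its logic are invented for illustration); it simply shows how logging and the built-in `breakpoint()` hook can replace scattered print statements:

```python
# Hypothetical example: debugging a discount calculation.
# Instead of sprinkling print() calls everywhere, use logging
# (which can be silenced by raising the log level) or drop into
# the built-in debugger with breakpoint() (Python 3.7+).
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def apply_discount(price: float, percent: float) -> float:
    """Return price after applying a percentage discount."""
    log.debug("apply_discount(price=%s, percent=%s)", price, percent)
    # breakpoint()  # uncomment to inspect state interactively in pdb
    discounted = price * (1 - percent / 100)
    log.debug("discounted=%s", discounted)
    return round(discounted, 2)

print(apply_discount(200.0, 15))  # → 170.0
```

Unlike print statements, the debug output here can be switched off for production by changing the log level, rather than hunting down and deleting every call by hand.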

Given everything I just mentioned, a beneficial, all-in-one exercise you can do in the first couple of weeks is to shadow another engineer and do peer coding sessions. This allows you to observe the entire process, end to end, from the moment a ticket is assigned to an engineer to when it gets deployed to production.

The first couple of weeks can also feel frustrating if you're not yet given an opportunity to get your hands dirty. To counter this, ask your manager to assign you some starter tickets. These are usually minor tasks, like code cleanup or adding unit tests, but they let you tinker with the codebase, which improves your understanding and gives you a sense of accomplishment, a very encouraging feeling in the early days of a new job.
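To illustrate what such a starter ticket might look like, here is a hedged sketch of adding unit tests for a small helper. The `slugify()` function and its behavior are invented for the example, standing in for whatever small utility your codebase already has:

```python
# Hypothetical starter ticket: add unit tests for an existing helper.
import unittest

def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("My First Tech Job"), "my-first-tech-job")

    def test_extra_whitespace(self):
        # split() with no arguments collapses runs of whitespace
        self.assertEqual(slugify("  Hello   World "), "hello-world")

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False keeps the interpreter alive
```

Small, well-scoped tests like these are easy to review, force you to run the project's test suite end to end, and teach you the team's testing conventions along the way.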

Speak up, especially when you're stuck

I want to stress the importance of communication when you're stuck. This happens, especially in the early months of a new job, and as frustrating as it can be, this is where your communication skills will shine.

  • Be transparent about blockers and your progress. Even if it's something as trivial as permission issues (a fairly common blocker for new employees), ensure that your manager is aware.
  • Don't wait until the last day to report that something will be delayed. Delays in your project can hold up many other things. Share necessary project delays well in advance so your manager can inform stakeholders.
  • Don't forget things like thoroughly testing your changes or documenting your code just because you're in a rush.

Gain technical context

Gaining technical context is something I've personally struggled with, and I've actively worked on changing my approach in this area.

When I started as an intern, I would go in with a very focused mindset regarding what I wanted to learn. I'd have a laser-sharp focus on my project, but I'd completely turn a blind eye to everything else. Over the years, I realized that turning a blind eye to other or adjacent projects might not be the wisest decision.

First and foremost, it impacts your understanding of your work. I was naive to think I could be a good engineer if I focused exclusively on my project. That's just not true. You should take the time to understand other services with which your project might interact. You don't need to get into the nitty gritty, but developing a basic understanding goes a long way.

A common experience that new employees undergo is disconnecting from the rest of the company, which is a very natural feeling, especially at larger companies. I'm someone who develops a sense of exclusion very quickly, so when I moved to Yelp, a significantly larger company than my previous one, with projects of a much larger scale, I prioritized understanding the big picture. Not only did I work on developing an understanding of my project but also of other adjacent projects.

In my first few weeks at Yelp, I sat down with various engineers on my team and asked them to give me a bird's eye view of what I would be doing and the project's overarching goal. This approach was incredibly helpful because not only did I get varying degrees of explanations based on how senior the engineer was and how long they had been working on the project, but it also deepened my understanding of what I would be working on. I went into these meetings with the goal that my knowledge of the project should allow me to explain what I do to a stranger on the street. To this end, I asked my tech lead to clarify at what point my work came into the picture when a user opened the Yelp app and searched for something.

Architecture diagrams can also help in this scenario, especially when understanding how different services interact.

Establish expectations

For the longest time, I thought that all I needed to do was my best and be a good employee. If I was doing work, meeting goals, and no one complained, that should be good enough, right? Wrong.

You must be strategic with your career. You can't just outsource it to people's goodwill and hope you'll get the desired results just because you're meeting expectations.

  • Establish clear criteria the moment you start your new job. This varies by company: some organizations have well-defined measures, while others barely have any. If it's the latter, I suggest you sit down with your manager within the first couple of weeks and agree on clear criteria together.
  • Make sure you thoroughly understand how you will be evaluated and what measures are used.

I remember walking out of my first evaluation very confused in my first full-time role. The whole conversation had been very vague and hand-wavy, and I had no clarity about my strengths, weaknesses, or even steps to improve.

At first, it was easy to attribute everything to my manager because the new employee in me thought this was their job, not mine. But over time, I realized that I couldn't just take a backseat as far as my performance evaluations were concerned. You can't just do good work and expect it to be enough. You have to actively take part in these conversations. You have to make sure that your effort and contributions are being noticed. From regularly contributing to technical design conversations to setting up socials for your team, ensure that your work is acknowledged.

Tying into establishing expectations is also the importance of actively seeking feedback. Don't wait until your formal performance evaluations every three or four months to find out how you're doing. Actively set up a feedback loop with your manager. Try to have regular conversations where you're seeking feedback, as scary as that may be.

Navigate working in distributed teams

The workplace has evolved over the past two years, and working in remote and distributed teams is now the norm instead of a rarity. I've listed some tips to help you navigate working in distributed teams:

  • Establish core hours and set these on your calendar. These are a set of hours that your team will unanimously agree upon, and the understanding is that everyone should be online and responsive during these hours. This is also convenient because meetings only get scheduled within these hours, making it much easier to plan your day.
  • Be mindful of people's time zones and lunch hours.
  • In the virtual world, you need to make a greater effort to maintain social interactions, and little gestures can go a long way in helping make the work environment much friendlier. These include the following:
    • When starting meetings, exchange pleasantries and ask people how their weekend/day has been. This helps break the ice and enables you to build a more personal connection with your team members, which goes beyond work.
    • Suggest an informal virtual gathering periodically for some casual chit-chat with the team.

Maintain a work-life balance

At the beginning of your career, it's easy to think that it's all about putting in those hours, especially given the 'hustle culture' narrative that we're fed 24/7 and the idea that a work-life balance is established in the later stages of our careers. This idea couldn't be further from the truth because a work-life balance isn't just magically going to occur for you. You need to actively and very diligently work on it.

The scary thing about not having a work-life balance is that it creeps up on you slowly. It starts with checking emails after hours, then escalates to working weekends and feeling perpetually exhausted.


I've listed some tips to help you avoid this situation:

  • Turn off/pause notifications and emails and set yourself to offline.
  • Do not work weekends. It starts with you working one weekend, and the next thing you know, you're working most weekends. Whatever it is, it can wait until Monday.
  • If you're an on-call engineer, understand your company's on-call policies. Some companies offer monetary compensation, while others give time off in lieu. Use this time. Not using benefits like PTO (paid time off) and wellness days really hurts your longevity at work.

Wrap up

There's no doubt that starting a new job is stressful and difficult. I hope that these tips and tricks will make your first few months easier and set you up for great success with your new position. Remember to communicate, establish your career goals, take initiative, and use the company's tools effectively. I know you'll do great!

Rupert Beatty


Best 6 Job Interview Tips for Developers

Introduction

Job interviews are challenging for many people because, in most cases, you have to meet your future employer in person. On top of that, you never know what questions you'll hear or which answers will work best.

Although the technical part is important and depends on your hard skills, even an experienced developer can fail an interview because of weak soft skills or a lack of preparation for the non-technical part.

But don't worry: it's possible to prepare for the non-technical questions.

Some questions come up again and again, asked by the people who decide whether to hire you. They are mostly about you: your personality, strengths, weaknesses, and behavior in difficult situations.

In this article, I'd like to cover the most common non-technical interview questions and answers for developers. I'll also try to help you manage your nervousness, as it can do a lot of damage to the outcome of your interview.

Let’s start!

Tell me about yourself

"Tell me about yourself" is usually one of the first questions in the non-technical part of an interview. That's natural: your interviewer doesn't know much about you beyond what's written in your resume.

The question may also be phrased a little differently, like "Walk me through your experience," but it's asking for the same thing.

So what's the best answer? What information should you highlight, and what is better left unsaid?

The question is about your career, so answer it from a professional point of view and keep personal details to yourself. There's a simple formula you can use: structure your answer around the present, the past, and the future.

First, talk about the present: describe your current role or what you're currently doing professionally, your responsibilities, and the skills you use in your position. You can also mention recent accomplishments.

Next, go to the past: tell the interviewer how you got to your current position and describe the previous experience most relevant to the job you're applying for.

Finally, the future: describe your goals, what you'd like to achieve, and how the position you're applying for helps you get there. You can also mention why you're a good fit for the role.

When preparing your answer, remember that this isn't the only way to structure it.

Tailor your answer to the position you're applying for, and keep it professional but positive.

It's a great idea to practice a few different ways of describing yourself before the interview so you feel confident on the day.

Here's an example answer for a software engineer position:

I'm a software engineer with about 5 years of experience, focusing on JavaScript and front-end development using modern frameworks like React.js, Angular, and VueJS.

Currently, I'm working at XYZ as a front-end developer on one of the world's most popular e-commerce platforms, using technologies like TypeScript and Angular 9. My team is responsible for improving the user experience, which means I work closely with UX designers and pay a lot of attention to the platform's user journey.

Over the past 5 years, I've worked at startups and software houses, where I gained broad knowledge of the newest technologies used in front-end development and learned how to solve the kinds of problems that make up a big part of a programmer's work. I think this makes me a great fit for your company as well.

Although I'm quite happy with my current role, I feel it's time to take a new career path and develop my back-end skills, starting with a programming language I already know. That's why I applied for this Node.js developer position at your company, and I'm excited to use my new skills to create awesome things.

Strengths and weaknesses

Another popular topic in non-technical interviews is your strengths and weaknesses. Your future employer wants to know more about you and what good and bad sides you see in yourself.

This question is no reason to feel stressed. As with the previous question, it's important to tell the truth, prepare in advance, and think of some strengths and weaknesses that are appropriate to bring up during an interview.

Strengths

Many people feel a little weird talking about their strengths and try to downplay their skills and positive sides so as not to seem boastful.

To identify good strengths, think about the personal skills and personality traits that are useful in your work or that align with the company's mission or vision.

Good examples of strengths you could mention as a software engineer are:

  • Creativity: it helps you think outside of the box and find a good solution in every situation.
  • Teamwork: as a good team member, you can work with the team, help others, and discuss different solutions.
  • Patience: you have enough patience to dig deep and find a solution to the problem you face, even if it doesn't come in a second.
  • Enthusiasm and flexibility: you are positive about change, you stay focused on your work, and you are happy to try the different technologies and tools available.

These are just examples to show you where to look for your own strengths. Remember that it's a good idea to say a few words about how each one helps you in your current or previous job.

Weaknesses

Weaknesses are also a hard topic for many people. They are afraid of revealing incompetence or bad personal traits that could ruin their chances of getting the job.

In this case, it's also a good idea to plan your answer. Here are some examples of weaknesses you could talk about during an interview:

  • Focusing on technology too much: I sometimes spend too much time selecting the technology, which isn't always essential for the project or the company.
  • Trouble saying no: I often take on my colleagues' tasks because it's difficult for me to say no, even when I don't have time to finish them before the deadline.
  • Difficulty asking for help: I feel bad asking others for help, because it seems like I'm admitting I'm not competent enough to do my job.
  • Communication: I could get more experience in clear communication within the team and in people skills overall.

When talking about weaknesses, it's good to mention that you know you need to improve and that you've already taken action to do so.

I hope these examples help you define your own strengths and weaknesses. The most important thing is to think about them before the interview; since this is one of the most common questions, you can actually predict it.

What motivates you?

From a manager's point of view, it's very important to know what motivates you to do your job. With that information, they can judge whether the company can keep you motivated and whether you're a good fit.

For this question, you can think of answers like:

  • An interesting project that lets you show your expertise and create something awesome that improves people's lives, or at least the company's product.
  • Teamwork and a sense of a common goal: when the whole team is working toward something, you want to be as efficient as everyone else and not let them down.
  • Being appreciated by teammates and managers, and feeling that you're an important part of the company and that your work really means something.
  • Developing your skills, which lets you feel that you're growing and not standing still even if you stay at one company; varied tasks keep you from being stuck on a single thing for years.

Again, these are just examples meant to show you how to define your own motivation and turn it into a good answer to the "What motivates you?" question.

Questions to ask the interviewer

In every interview, the interviewer gives you a few minutes to ask questions about the position, the company, and so on. It's really worth using this time to find out anything that interests you and to show your interest in the company.

If you avoid asking questions, the interviewer may conclude that you're not excited about the position and just don't care.

Many people simply don't know what to ask, so here's a small list of questions to help you:

  • What professional development opportunities does the company offer?
  • What are the daily responsibilities of this position?
  • What do you like about working at the company?
  • How big is the team I'll be working with, and how is it managed?
  • What software will I be using as an employee?
  • Who will I be working with most closely?
  • What's the most challenging part of working at this company?
  • What's considered a success in this position?
  • Is there a chance to get promoted from this position?
  • What do you expect from me as an employee?

I hope these 10 questions give you an idea of what you can ask the interviewer.

Behavioral questions

Another part of the non-technical interview is behavioral questions. These give the interviewer an overview of how you behave in difficult situations.

As behavioral questions, you can expect to be asked about:

  • a conflict with a team member and how you resolved it;
  • your biggest failure and what you learned from it;
  • your biggest recent challenge or difficult situation and how you solved it;
  • your approach to solving problems;
  • your leadership skills.

For these questions, it's good to take some time before the interview to recall the difficult situations, challenges, and conflicts in your career, and to prepare answers that show you can handle difficult times and resolve them properly.

Job interview nervous

As I already mentioned, job interviews can be really stressful situations. Even if you are sure you did all you could to prepare for that meeting, it still can make you nervous. So, how to deal with the stress before the interview to fail.

  1. Go for a walk. The best would be to select a calm park where you can relax.
  2. Turn on a positive, motivating playlist that will put you in a good mood and get you thinking positively.
  3. Meet your best friend or anyone else who makes you feel good, so you won't focus too much on the upcoming meeting.
  4. Do a short meditation session.
  5. Exercise in the morning; training will give you positive energy.
  6. Smile and adopt a confident pose. Your body can affect your mind, and a happy, confident posture will also send positive signals to the interviewer.

I hope those tips will help you manage the stress before the interview, because it's essential to put your body and mind in a positive state!

Job interview tips for developers summary

It's time to summarize! In this article, I went through the most popular non-technical interview questions that you'll almost certainly be asked. I gave you some examples of good answers, but remember that it's very important to create your own honest answers.

Also, talking about weaknesses or failures is nothing to avoid. We are all human, and it's more than certain that all of us have had failures, so you will only seem dishonest if you try to convince someone that you've never had a weakness or failure in your career.

On the other hand, don't hesitate to talk about your strengths and successes, as those are just as important.

I hope these tips will help you prepare for your next interview. Besides that, remember to bring a good mood and positive vibes, because it's really important to be liked by the interviewer.

So, fingers crossed on your next interview!

If you prefer to watch a video instead of reading, join me on our YouTube channel, where I've recorded a video for you on the same topic.

Best 6 Job Interview Tips for Developers
Arpit Soni

How do I start working as a freelancer?

To start freelancing while you already have a full-time job, you’ll have to consider the following steps:

How to start freelancing (even when working full-time)?

1. Define your business goals.
2. Find a prospective niche (and stick to it).
3. Identify target clients.
4. Set your freelance rates.
5. Create a website (and portfolio).
6. Find your first client.
7. Expand your network.
8. Balance your full-time job with your part-time freelancing side gigs.

Define your business goals

Before you start freelancing, you’ll have to be honest with yourself, and answer an important question:
* Is freelancing just a side gig? Or do you plan to expand it to a full-time business?

The answer to this question will determine your next steps, considering that you’ll either aim to balance your full-time and freelance work, OR aim to work your way out of your current job to pursue a full-time freelance career.

The answer to this question is your long-term goal. To pursue it, you’ll have to set a number of short-term goals and answer questions such as:

* What niche will you specialize in?
* What services will you offer?
* How much do you want to be earning monthly before you decide to quit your full-time job (if applicable)?

Find a prospective niche (and stick to it)

No matter whether you’re a graphic designer, copywriter, developer, or anything in between by vocation, it’d be best if you were to specialize in a particular area of work:

For example, if you're a content writer, don't aim to write about every topic under the sun, from Top 3 Ways to Prepare Your Garden for Spring to Taxation Laws in All 50 US States Explained.

Sure, you may start by writing on various topics to find your ideal niche, but eventually you should pick one and stick to it.

After all, Cryptocurrency content writer or Technology content writer always sounds much better on your CV than General content writer. Moreover, such titles inspire more confidence on the part of clients, who will always be looking for specific, not general, content.

The same is true if you’re a graphic designer:
* consider your level of experience
* your current pool of connections
* your natural inclinations to a particular design niche

Then make your pick: focus on delivering interface design for apps, creating new custom logos, devising layouts for books, or any other specific design work.

Identify target clients

Just like you shouldn’t aim to cover every niche in your industry, you shouldn’t aim to cater to the needs of the entire industry’s market.

Small businesses, teams, remote workers, or even other freelancers may all require the same type of service you're looking to offer. But you'll need to focus on one or two types of clients in particular.

Say you want to start a blog about everything related to working remotely. There are freelancers, teams, but also entire businesses working remotely, and they can serve as your starting point.

* Think about the age of your desired readers. Perhaps you’re a Millennial, so you can write a blog about working remotely for Millennials?
* Think about the location. Perhaps you want to cover predominantly the US market?
* Think about the education level. Perhaps you want to cover newly independent remote workers, who’re just starting out their careers?
* Think about income. Perhaps you’re looking to write for people with a limited budget, but who want to try digital nomadism?
* Think about gender. Perhaps you want to predominantly target women freelancers?

These are only some questions you should ask yourself, but they reveal a lot. For example, that you can write for fresh-out-of-college female Millennials from the US looking to start and cultivate a remote career while traveling abroad with a limited budget.

Set your freelance rates

Setting your freelance rates always seems like a challenging point, but it’s a lot more straightforward when you list the necessary parameters that help determine your ideal (and realistic) pricing:

* Experience (if any)
* Education level
* Supply and demand for your services
* The prices in your industry
* The average freelance hourly rates in your niche
* Your location

Once you have all this data, calculate your hourly rate from it: higher education, more experience, and greater demand for your niche mean you can set higher prices. If you're based in the US, you'll likely be able to command higher rates than if you're based in the Philippines; of course, your living standards and expenses will also be higher, so you'll need to command those higher rates.
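As a rough illustration of the arithmetic, you can back out an hourly rate from a target monthly income. The function, its parameters, and the 25% overhead default are hypothetical, not from any source; adjust them to your situation:

```javascript
// Hypothetical helper: estimate a freelance hourly rate from a target monthly
// income, expected billable hours, and an overhead factor covering taxes,
// tools, and unpaid admin time (25% is an assumed default).
function estimateHourlyRate(targetMonthlyIncome, billableHoursPerMonth, overheadFactor = 0.25) {
  // Gross up the target income to cover overhead, then spread it over billable hours.
  const grossNeeded = targetMonthlyIncome * (1 + overheadFactor);
  return grossNeeded / billableHoursPerMonth;
}

// e.g. a $4,000/month target at 100 billable hours with 25% overhead
console.log(estimateHourlyRate(4000, 100)); // 50
```

The point of the overhead factor is that a freelancer's rate must cover more than salary; tune it upward once you know your real costs.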

Create a website (and portfolio)

Once you’ve defined your business goals, found a niche, identified your target clients, and set your prices, you’ll want to create an online presence. And, the best way to do so is by creating your own website with a portfolio showcasing your previous work, skills, and expertise. There are plenty of amazing tutorials on YouTube.

Creating a website for free through a website builder like Wix is fine, but you'll be better off buying a domain name from a hosting provider. You'll get a unique name for your online presence and a customized email address, so you'll look much more credible and professional to potential clients.

Regardless of what your industry is, it may be best if you were to choose your own name for the domain, especially when you’re mostly looking to showcase your portfolio. You’ll stand out better, and it’ll later be easier to switch to a different industry (or niche) if you find that you want to.

Once you've selected a host and domain name, you can install WordPress on your website and choose a theme. Then you can add a landing page describing your services and prices, and maybe even a separate page for a blog where you'll write about industry-related topics.

Find your first client

Your first client may contact you because of your personal website portfolio, but you should also actively pursue your first gig bearing in mind what employers look for. There are several ways you can do this:

* Get involved in your industry’s community
* Learn how to pitch through email
* Look through freelance job platforms/websites

Expand your network

Once you’ve landed your first client, you’ll need to work on finding recurring clients. Perhaps your first client will become a recurring one. And, perhaps the referral you’ve been given by said first client will inspire others to contact you and provide a steady stream of work.

In any case, it's best to expand your network, and here's where the famous Pareto principle comes in handy. According to it, cultivating a good relationship with the top 20% of your clients can bring you 80% of your new work through their referrals.

To expand your network, you can:

* partake in industry webinars
* attend events
* join Facebook groups, pages and communities
* streamline your LinkedIn network
* send out invites to professionals in your field (or a field that often requires your services)

Work on additional skills

Apart from your core, industry-related freelance skills (i.e., your hard skills), you’ll need to work on some additional skills — your soft skills.

Soft skills are more personality-related: communication and critical thinking are probably the most important to develop, but you'll also need to be persistent, good at handling stress, an efficient scheduler, and skilled at time management.

The more you upskill, the higher the rates you can command. Remember: knowledge is priceless.

You’ll also need to be confident, to persuade your potential clients that you possess the skills and experience they’re looking for.


Entering the freelancing business may sound overwhelming and complicated, but it’s actually pretty straightforward, once you follow the right steps.

Take your time and do what you're passionate about.



How do I start working as a freelancer?
Reid Rohan


Bull: Premium Queue Package for Handling Distributed Jobs & Messages


The fastest, most reliable, Redis-based queue for Node. Carefully written for rock-solid stability and atomicity.


  •  Minimal CPU usage due to a polling-free design.
  •  Robust design based on Redis.
  •  Delayed jobs.
  •  Schedule and repeat jobs according to a cron specification.
  •  Rate limiter for jobs.
  •  Retries.
  •  Priority.
  •  Concurrency.
  •  Pause/resume—globally or locally.
  •  Multiple job types per queue.
  •  Threaded (sandboxed) processing functions.
  •  Automatic recovery from process crashes.
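
Several of the features above (delayed jobs, retries, priorities, repeatable jobs) are enabled per job through the options object passed to queue.add. A minimal sketch; the option names follow Bull's job options, while the values are arbitrary examples:

```javascript
// Illustrative per-job options for queue.add(data, opts).
// Option names follow Bull's job options; the values are example choices.
const opts = {
  delay: 60000,                  // delayed job: wait 60s before it can be processed
  attempts: 3,                   // retries: attempt the job up to 3 times on failure
  backoff: 5000,                 // wait 5s between retry attempts
  priority: 1,                   // priority: 1 is the highest priority
  repeat: { cron: '15 3 * * *' } // repeatable job, per the cron specification
};

// Usage (requires a running Redis instance):
// someQueue.add({ /* job data */ }, opts);
```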

And coming up on the roadmap...

  •  Job completion acknowledgement.
  •  Parent-child jobs relationships.


There are a few third-party UIs that you can use for monitoring:

Bull v3

Bull <= v2

Monitoring & Alerting

Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

The comparison covers: delayed jobs, global events, rate limiting, sandboxed workers, repeatable jobs, atomic ops, and whether each solution is optimized for jobs, messages, or both (see the full table in the project README).


npm install bull --save


yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.


We welcome all types of contributions, whether code fixes, new features, or documentation improvements. Code formatting is enforced by Prettier. For commits, please follow the Conventional Commits convention. All code must pass lint rules and test suites before it can be merged into develop.

Quick Guide

Basic Usage

var Queue = require('bull');

var videoQueue = new Queue('video transcoding', 'redis://');
var audioQueue = new Queue('audio transcoding', {redis: {port: 6379, host: '', password: 'foobared'}}); // Specify Redis connection using object
var imageQueue = new Queue('image transcoding');
var pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function(job, done){

  // contains the custom data passed when the job was created
  // contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function(job, done){
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function(job, done){
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function(job){
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({video: ''});
audioQueue.add({audio: ''});
imageQueue.add({image: ''});

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function(job){ // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(;

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  •  Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function(job){
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');
// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

paymentsQueue.process(function(job){
  // Check payments
});

// Repeat payment job once every day at 3:15 (am)
paymentsQueue.add(paymentsData, {repeat: {cron: '15 3 * * *'}});

As a tip, check your expressions here to verify they are correct: cron expression descriptor

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function(){
  // queue is paused now
});

queue.resume().then(function(){
  // queue is resumed now
});


A queue emits some useful events, for example...

queue.on('completed', function(job, result){
  // Job completed with output result!
});
For more information on events, including the full list of events that are fired, check out the Events reference.

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

var userJohn = new Queue('john');
var userLisa = new Queue('lisa');

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.

Cluster support

NOTE: From version 3.2.0 and above it is recommended to use threaded processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

var Queue = require('bull');
var cluster = require('cluster');

var numWorkers = 8;
var queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (var i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function(worker) {
    // Let's create a few jobs for the queue workers
    for (var i = 0; i < 500; i++) {
      queue.add({foo: 'bar'});
    }
  });

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + + ' died');
  });
} else {
  queue.process(function(job, jobDone) {
    console.log('Job done by worker',,;
    jobDone();
  });
}


For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!

Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
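
The locking knobs described above (lockDuration, lockRenewTime, maxStalledCount) live in the settings object accepted by the Queue constructor. A sketch follows; the option names follow Bull's advanced settings, and the values shown are what I believe the defaults to be, so treat them as assumptions and check the reference:

```javascript
// Advanced settings controlling the lock/stall behavior described above.
// Names follow Bull's advanced settings; values are the assumed defaults.
const settings = {
  lockDuration: 30000,    // ms a job's lock is held before the job can be considered stalled
  lockRenewTime: 15000,   // lock renewal interval, usually lockDuration / 2
  stalledInterval: 30000, // how often the stalled-job check runs
  maxStalledCount: 1      // how many times a stalled job will be restarted
};

// Usage (requires a running Redis instance):
// const queue = new Queue('media transcoding', 'redis://', { settings });
// queue.on('stalled', function(job){ console.error('stalled job', job.id); });
```

Raising lockDuration makes lock loss less likely for CPU-heavy processors, at the cost of slower detection of genuinely stalled jobs.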

📻 News and updates

Follow me on Twitter for important news and updates.

🛠 Tutorials

You can find tutorials and news in this blog:

Used by

Bull is popular among large and small organizations, like the following ones:



If you want to start using the next major version of Bull, written entirely in TypeScript, you are welcome to try the new repo here. Otherwise, you are very welcome to keep using Bull, which is a safe, battle-tested codebase.

🚀 Sponsors 🚀


If you need high-quality production Redis instances for your Bull projects, please consider subscribing to RedisGreen, a leader in Redis hosting that works perfectly with Bull. Use the promo code "BULLMQ" when signing up to help us sponsor the development of Bull!

Official FrontEnd, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at

Check the new Guide!

Download Details:

Author: Optimalbits
Source Code: 
License: View license


Rufus Scheduler: Job Scheduler for Ruby (at, Cron, in and Every Jobs)


Job scheduler for Ruby (at, cron, in and every jobs).

It uses threads.

Note: maybe you are looking for the README of rufus-scheduler 2.x? (especially if you're using Dashing, which is stuck on rufus-scheduler 2.0.24)


# quickstart.rb

require 'rufus-scheduler'

scheduler = '3s' do
  puts 'Hello... Rufus'
end

scheduler.join
  # let the current thread join the scheduler thread
  # (please note that this join should be removed when scheduling
  # in a web application (Rails and friends) initializer)

(run with ruby quickstart.rb)

Various forms of scheduling are supported:

require 'rufus-scheduler'

scheduler =

# ... '10d' do
  # do something in 10 days
end '2030/12/12 23:30:00' do
  # do something at a given point in time
end

scheduler.every '3h' do
  # do something every 3 hours
end
# or
scheduler.every '3h10m' do
  # do something every 3 hours and 10 minutes
end

scheduler.cron '5 0 * * *' do
  # do something every day, five minutes after midnight
  # (see "man 5 crontab" in your terminal)
end

# ...

Rufus-scheduler uses fugit for parsing time strings, et-orbi for pairing time and tzinfo timezones.


Rufus-scheduler (out of the box) is an in-process, in-memory scheduler. It uses threads.

It does not persist your schedules. When the process is gone and the scheduler instance with it, the schedules are gone.

A rufus-scheduler instance will go on scheduling while it is present among the objects in a Ruby process. To make it stop scheduling you have to call its #shutdown method.

related and similar gems

  • Whenever - let cron call back your Ruby code, trusted and reliable cron drives your schedule
  • ruby-clock - a clock process / job scheduler for Ruby
  • Clockwork - rufus-scheduler inspired gem
  • Crono - an in-Rails cron scheduler
  • PerfectSched - highly available distributed cron built on Sequel and more

(please note: rufus-scheduler is not a cron replacement)

note about the 3.0 line

It's a complete rewrite of rufus-scheduler.

There is no EventMachine-based scheduler anymore.

I don't know what this Ruby thing is, where are my Rails?

I'll drive you right to the tracks.

notable changes:

  • As said, no more EventMachine-based scheduler
  • scheduler.every('100') { ... } will schedule every 100 seconds (previously, it would have been 0.1s). This aligns rufus-scheduler with Ruby's sleep(100)
  • The scheduler isn't catching the whole of Exception anymore, only StandardError
  • The error_handler is #on_error (instead of #on_exception), by default it now prints the details of the error to $stderr (used to be $stdout)
  • Rufus::Scheduler::TimeOutError renamed to Rufus::Scheduler::TimeoutError
  • Introduction of "interval" jobs. Whereas "every" jobs are like "every 10 minutes, do this", interval jobs are like "do that, then wait for 10 minutes, then do that again, and so on"
  • Introduction of a lockfile: true/filename mechanism to prevent multiple schedulers from executing
  • "discard_past" is on by default. If the scheduler (its host) sleeps for 1 hour and a every '10m' job is on, it will trigger once at wakeup, not 6 times (discard_past was false by default in rufus-scheduler 2.x). No intention to re-introduce discard_past: false in 3.0 for now.
  • Introduction of Scheduler #on_pre_trigger and #on_post_trigger callback points

getting help

So you need help. People can help you, but first help them help you, and don't waste their time. Provide a complete description of the issue. If it works on A but not on B and others have to ask you: "so what is different between A and B" you are wasting everyone's time.

"hello", "please" and "thanks" are not swear words.

Go read how to report bugs effectively, twice.

Update: might help you.

on Gitter

You can find help via chat in the combined fugit, et-orbi, and rufus-scheduler chat room.

Please be courteous.


Yes, issues can be reported in rufus-scheduler issues; I'd actually prefer bugs in there. If there is nothing wrong with rufus-scheduler, a Stack Overflow question is better.



Rufus-scheduler supports five kinds of jobs: in, at, every, interval, and cron jobs.

Most of the rufus-scheduler examples show block scheduling, but it's also OK to schedule handler instances or handler classes.

in, at, every, interval, cron

In and at jobs trigger once.

require 'rufus-scheduler'

scheduler = '10d' do
  puts "10 days reminder for review X!"
end '2014/12/24 2000' do
  puts "merry xmas!"
end

In jobs are scheduled with a time interval; they trigger after that time has elapsed. At jobs are scheduled with a point in time; they trigger when that point in time is reached (better to choose a point in the future).

Every, interval and cron jobs trigger repeatedly.

require 'rufus-scheduler'

scheduler =

scheduler.every '3h' do
  puts "change the oil filter!"
end

scheduler.interval '2h' do
  puts "thinking..."
  sleep(rand * 1000)
  puts "thought."
end

scheduler.cron '00 09 * * *' do
  puts "it's 9am! good morning!"
end

Every jobs try hard to trigger following the frequency they were scheduled with.

Interval jobs trigger, execute and then trigger again after the interval elapsed. (every jobs time between trigger times, interval jobs time between trigger termination and the next trigger start).

Cron jobs are based on the venerable cron utility (man 5 crontab). They trigger following a pattern given in (almost) the same language cron uses.


#schedule_x vs #x

schedule_in, schedule_at, schedule_cron, etc will return the new Job instance.

in, at, cron will return the new Job instance's id (a String).

job_id = '10d' do
    # ...
  end

job = scheduler.job(job_id)

# versus

job =
  scheduler.schedule_in '10d' do
    # ...
  end

# also

job = '10d', job: true do
    # ...
  end

#schedule and #repeat

Sometimes it pays to be less verbose.

The #schedule method schedules an at, in, or cron job. It decides based on its input. It returns the Job instance.

scheduler.schedule '10d' do; end.class
  # => Rufus::Scheduler::InJob

scheduler.schedule '2013/12/12 12:30' do; end.class
  # => Rufus::Scheduler::AtJob

scheduler.schedule '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

The #repeat method schedules and returns an EveryJob or a CronJob.

scheduler.repeat '10d' do; end.class
  # => Rufus::Scheduler::EveryJob

scheduler.repeat '* * * * *' do; end.class
  # => Rufus::Scheduler::CronJob

(Yes, no combination here gives back an IntervalJob).

schedule blocks arguments (job, time)

A schedule block may be given 0, 1 or 2 arguments.

The first argument is "job", it's simply the Job instance involved. It might be useful if the job is to be unscheduled for some reason.

scheduler.every '10m' do |job|

  status = determine_pie_status

  if status == 'burnt' || status == 'cooked'
    job.unschedule
  end
end
The second argument is "time": the time when the job got cleared for triggering (not necessarily the current time).

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

"every" jobs and changing the next_time in-flight

It's OK to change the next_time of an every job in-flight:

scheduler.every '10m' do |job|

  # ...

  status = determine_pie_status

  job.next_time = + 30 * 60 if status == 'burnt'
    # if burnt, wait 30 minutes for the oven to cool a bit
end

It should work as well with cron jobs, not so with interval jobs whose next_time is computed after their block ends its current run.

scheduling handler instances

It's OK to pass any object, as long as it responds to #call(), when scheduling:

class Handler
  def self.call(job, time)
    p "- Handler called for #{} at #{time}"
  end
end '10d', Handler

# or

class OtherHandler
  def initialize(name)
    @name = name
  end
  def call(job, time)
    p "* #{time} - Handler #{@name.inspect} called for #{}"
  end
end

oh ='Doe')

scheduler.every '10m', oh '3d5m', oh

The call method must accept 2 (job, time), 1 (job) or 0 arguments.

Note that time is the time when the job got cleared for triggering. If there are mutexes involved, now = mutex_wait_time + time...

scheduling handler classes

One can pass a handler class to rufus-scheduler when scheduling. Rufus will instantiate it and that instance will be available via job#handler.

class MyHandler
  attr_reader :count
  def initialize
    @count = 0
  end
  def call(job)
    @count += 1
    puts ". #{self.class} called at #{} (#{@count})"
  end
end

job = scheduler.schedule_every '35m', MyHandler

job.handler
  # => #<MyHandler:0x000000021034f0>
job.handler.count
  # => 0

If you want to keep that "block feeling":

job_id =
  scheduler.every '10m', do
    def call(job)
      puts ". hello #{self.inspect} at #{}"
    end
  end

pause and resume the scheduler

The scheduler can be paused via the #pause and #resume methods. One can determine if the scheduler is currently paused by calling #paused?.

While paused, the scheduler still accepts schedules, but no schedule will get triggered as long as #resume isn't called.

job options

name: string

Sets the name of the job.

scheduler.cron '*/15 8 * * *', name: 'Robert' do |job|
  puts "A, it's #{} and my name is #{}"
end

job1 =
  scheduler.schedule_cron '*/30 9 * * *', n: 'temporary' do |job|
    puts "B, it's #{} and my name is #{}"
  end
# ... = 'Beowulf'

blocking: true

By default, jobs are triggered in their own, new threads. When blocking: true, the job is triggered in the scheduler thread (a new thread is not created). Yes, while a blocking job is running, the scheduler is not scheduling.

overlap: false

Since, by default, jobs are triggered in their own new threads, job instances might overlap. For example, a job that takes 10 minutes and is scheduled every 7 minutes will have overlaps.

To prevent overlap, one can set overlap: false. Such a job will not trigger if one of its instances is already running.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

mutex: mutex_instance / mutex_name / array of mutexes

When a job with a mutex triggers, the job's block is executed with the mutex around it, preventing other jobs with the same mutex from entering (it makes the other jobs wait until it exits the mutex).

This is different from overlap: false, which is, first, limited to instances of the same job, and, second, doesn't make the incoming job instance block/wait but gives up.

:mutex accepts a mutex instance or a mutex name (String). It also accepts an array of mutex names / mutex instances. It allows for complex relations between jobs.

Array of mutexes: original idea and implementation by Rainux Luo

Note: creating lots of different mutexes is OK. Rufus-scheduler will place them in its Scheduler#mutexes hash... And they won't get garbage collected.

The :overlap option is considered before the :mutex option when the scheduler is reviewing jobs for triggering.

timeout: duration or point in time

It's OK to specify a timeout when scheduling some work. After the time specified, it gets interrupted via a Rufus::Scheduler::TimeoutError. '10d', timeout: '1d' do
  begin
    # ... do something
  rescue Rufus::Scheduler::TimeoutError
    # ... that something got interrupted after 1 day
  end
end

The :timeout option accepts either a duration (like "1d" or "2w3d") or a point in time (like "2013/12/12 12:00").

:first_at, :first_in, :first, :first_time

This option is for repeat jobs (cron / every) only.

It's used to specify the first time after which the repeat job should trigger for the first time.

In the case of an "every" job, this will be the first time (modulo the scheduler frequency) the job triggers. For a "cron" job as well, the :first will point to the first time the job has to trigger, the following trigger times are then determined by the cron string.

scheduler.every '2d', first_at: Time.now + 10 * 3600 do
  # ... every two days, but start in 10 hours
end

scheduler.every '2d', first_in: '10h' do
  # ... every two days, but start in 10 hours
end

scheduler.cron '00 14 * * *', first_in: '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end

:first, :first_at and :first_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the first_at (a Time instance) directly:

job.first_at = Time.now + 10
job.first_at = Rufus::Scheduler.parse('2029-12-12')
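How a duration string like '10h' becomes a concrete first trigger time can be sketched with a mini-parser (rufus-scheduler delegates the real parsing to the fugit gem; the UNITS table and both method names here are invented for the example):

```ruby
UNITS = { 's' => 1, 'm' => 60, 'h' => 3600, 'd' => 86400, 'w' => 604800 }

# '2w3d' => 2 * 604800 + 3 * 86400 (seconds)
def parse_duration(s)
  s.scan(/(\d+(?:\.\d+)?)([smhdw])/).sum { |n, u| n.to_f * UNITS[u] }
end

# resolve a :first_in duration relative to "now"
def first_time(now, first_in)
  now + parse_duration(first_in)
end
```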

The first argument (in all its flavours) accepts a :now or :immediately value. That schedules the first occurrence for immediate triggering. Consider:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

n = Time.now; p [ :scheduled_at, n, n.to_f ]

s.every '3s', first: :now do
  n = Time.now; p [ :in, n, n.to_f ]
end

that'll output something like:

[:scheduled_at, 2014-01-22 22:21:21 +0900, 1390396881.344438]
[:in, 2014-01-22 22:21:21 +0900, 1390396881.6453865]
[:in, 2014-01-22 22:21:24 +0900, 1390396884.648807]
[:in, 2014-01-22 22:21:27 +0900, 1390396887.651686]
[:in, 2014-01-22 22:21:30 +0900, 1390396890.6571937]

:last_at, :last_in, :last

This option is for repeat jobs (cron / every) only.

It indicates the point in time after which the job should unschedule itself.

scheduler.cron '5 23 * * *', last_in: '10d' do
  # ... do something every evening at 23:05 for 10 days
end

scheduler.every '10m', last_at: Time.now + 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

scheduler.every '10m', last_in: 10 * 3600 do
  # ... do something every 10 minutes for 10 hours
end

:last, :last_at and :last_in all accept a point in time or a duration (number or time string). Use the symbol you think makes your schedule more readable.

Note: it's OK to change the last_at (nil or a Time instance) directly:

job.last_at = nil
  # remove the "last" bound

job.last_at = Rufus::Scheduler.parse('2029-12-12')
  # set the last bound

times: nb of times (before auto-unscheduling)

One can tell how many times a repeat job (CronJob or EveryJob) is to execute before unscheduling by itself.

scheduler.every '2d', times: 10 do
  # ... do something every two days, but not more than 10 times
end

scheduler.cron '0 23 * * *', times: 31 do
  # ... do something every day at 23:00 but do it no more than 31 times
end

It's OK to assign nil to :times to make sure the repeat job is not limited. It's useful when the :times is determined at scheduling time.

scheduler.cron '0 23 * * *', times: (nolimit ? nil : 10) do
  # ...
end

The value set by :times is accessible in the job. It can be modified anytime.

job =
  scheduler.cron '0 23 * * *' do
    # ...
  end

# later on...

job.times = 10
  # 10 days and it will be over
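The :times countdown amounts to a decrement-and-unschedule check after each run. A sketch of that logic (the RepeatJob class is invented, not the gem's code):

```ruby
class RepeatJob
  attr_accessor :times

  def initialize(times = nil, &block)
    @times = times        # nil means "no limit"
    @block = block
    @scheduled = true
  end

  def scheduled?
    @scheduled
  end

  # called by the scheduler at each trigger
  def trigger
    return unless @scheduled
    @block.call
    return if @times.nil?
    @times -= 1
    @scheduled = false if @times <= 0  # auto-unschedule
  end
end
```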

Job methods

When calling one of the shorter schedule methods (in, at, every, cron), the id (String) of the job is returned. The longer schedule methods (schedule_in, schedule_at, ...) return Job instances directly. Calling the shorter schedule methods with job: true also returns Job instances instead of job ids (Strings).

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  job =
    scheduler.schedule_in '1w' do
      # ...
    end

  job =
    scheduler.in '1w', job: true do
      # ...
    end

Those Job instances have a few interesting methods / properties:

id, job_id

Returns the job id.

job = scheduler.schedule_in('10d') do; end
job.id
  # => "in_1374072446.8923042_0.0_0"

scheduler

Returns the scheduler instance itself.

opts

Returns the options passed at the Job creation.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.opts
  # => { :tag => 'hello' }

original

Returns the original schedule.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.original
  # => '10d'

callable, handler

callable() returns the scheduled block (or the call method of the callable object passed in lieu of a block)

handler() returns nil if a block was scheduled and the instance scheduled otherwise.

# when passing a block

job =
  scheduler.schedule_in('10d') do
    # ...
  end

job.handler
  # => nil
job.callable
  # => #<Proc:0x00000001dc6f58@/home/jmettraux/whatever.rb:115>


# when passing something else than a block

class MyHandler
  attr_reader :counter
  def initialize
    @counter = 0
  end
  def call(job, time)
    @counter = @counter + 1
  end
end

job = scheduler.schedule_in('10d', MyHandler.new)

job.callable
  # => #<Method: MyHandler#call>
job.handler
  # => #<MyHandler:0x0000000163ae88 @counter=0>

source_location

Added to rufus-scheduler 3.8.0.

Returns the array [ 'path/to/file.rb', 123 ] like Proc#source_location does.

require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

job = scheduler.schedule_every('2h') { p Time.now }

p job.source_location
  # ==> [ '/home/jmettraux/rufus-scheduler/test.rb', 6 ]

scheduled_at

Returns the Time instance when the job got created.

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.scheduled_at
  # => 2013-07-17 23:48:54 +0900

last_time

Returns the last time the job triggered (usually nil for AtJob and InJob).

job = scheduler.schedule_every('10s') do; end

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900
job.last_time
  # => nil (since we've just scheduled it)

# after 10 seconds

job.scheduled_at
  # => 2013-07-17 23:48:54 +0900 (same as above)
job.last_time
  # => 2013-07-17 23:49:04 +0900

previous_time

Returns the previous #next_time.

scheduler.every('10s') do |job|
  puts "job scheduled for #{job.previous_time} triggered at #{Time.now}"
  puts "next time will be around #{job.next_time}"
  puts "."
end

last_work_time, mean_work_time

The job keeps track of how long its work was in the last_work_time attribute. For a one time job (in, at) it's probably not very useful.

The attribute mean_work_time contains a computed mean work time. It's recomputed after every run (if it's a repeat job).
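A running mean like mean_work_time can be maintained incrementally, without keeping every sample. A sketch (the WorkTimer class is an invented stand-in, not the gem's implementation):

```ruby
class WorkTimer
  attr_reader :last_work_time, :mean_work_time

  def initialize
    @runs = 0
    @last_work_time = 0.0
    @mean_work_time = 0.0
  end

  # record the duration (in seconds) of one job run
  def record(seconds)
    @runs += 1
    @last_work_time = seconds
    # incremental mean: m_n = m_(n-1) + (x_n - m_(n-1)) / n
    @mean_work_time += (seconds - @mean_work_time) / @runs
  end
end
```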

next_times(n)

Returns an array of EtOrbi::EoTime instances (Time instances with a designated time zone), listing the n next occurrences for this job.

Please note that for "interval" jobs, a mean work time is computed each time and it's used by this #next_times(n) method to approximate the next times beyond the immediate next time.

unschedule

Unschedules the job, preventing it from firing again and removing it from the schedule. This doesn't prevent a thread currently running this job from going on until its end.

threads

Returns the list of threads currently "hosting" runs of this Job instance.

kill

Interrupts all the work threads currently running for this job instance. They discard their work and are free for their next run (of whatever job).

Note: this doesn't unschedule the Job instance.

Note: if the job is pooled for another run, a free work thread will probably pick up that next run and the job will appear as running again. You'd have to unschedule and kill to make sure the job doesn't run again.

running?

Returns true if there is at least one running Thread hosting a run of this Job instance.

scheduled?

Returns true if the job is scheduled (is due to trigger). For repeat jobs it should return true until the job gets unscheduled. "at" and "in" jobs will respond with false as soon as they start running (execution triggered).

pause, resume, paused?, paused_at

These four methods are only available to CronJob, EveryJob and IntervalJob instances. One can pause or resume such jobs thanks to these methods.

job =
  scheduler.schedule_every('10s') do
    # ...
  end

job.pause
  # => 2013-07-20 01:22:22 +0900
job.paused?
  # => true
job.paused_at
  # => 2013-07-20 01:22:22 +0900

job.resume
  # => nil

tags

Returns the list of tags attached to this Job instance.

By default, returns an empty array.

job = scheduler.schedule_in('10d') do; end
job.tags
  # => []

job = scheduler.schedule_in('10d', tag: 'hello') do; end
job.tags
  # => [ 'hello' ]

[]=, [], key?, has_key?, keys, values, and entries

Threads have thread-local variables, similarly Rufus-scheduler jobs have job-local variables. Those are more like a dict with thread-safe access.

job =
  @scheduler.schedule_every '1s' do |job|
    job[:timestamp] = Time.now.to_f
    job[:counter] ||= 0
    job[:counter] += 1
  end

sleep 3.6

job[:counter]
  # => 3

job.key?(:timestamp) # => true
job.has_key?(:timestamp) # => true
job.keys # => [ :timestamp, :counter ]

Locals can be set at schedule time:

job0 =
  @scheduler.schedule_cron '*/15 12 * * *', locals: { a: 0 } do
    # ...
  end
job1 =
  @scheduler.schedule_cron '*/15 13 * * *', l: { a: 1 } do
    # ...
  end

One can fetch the Hash directly with Job#locals. Of course, direct manipulation is not thread-safe.

job.locals.entries.each do |k, v|
  p "#{k}: #{v}"
end

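The job-local store boils down to a Hash guarded by a Mutex. A sketch of the accessors listed above (the JobLocals class name is invented; this is not the gem's implementation):

```ruby
class JobLocals
  def initialize(h = {})
    @h = h
    @m = Mutex.new
  end

  # each accessor takes the mutex, so concurrent job runs don't clash
  def []=(k, v); @m.synchronize { @h[k] = v }; end
  def [](k);     @m.synchronize { @h[k] }; end
  def key?(k);   @m.synchronize { @h.key?(k) }; end
  def keys;      @m.synchronize { @h.keys }; end
  def values;    @m.synchronize { @h.values }; end
end
```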
call

Job instances have a #call method. It simply calls the scheduled block or callable immediately.

job =
  @scheduler.schedule_every '10m' do |job|
    # ...
  end

job.call
  # calls the job's block right now

Warning: the Scheduler#on_error handler is not involved. Error handling is the responsibility of the caller.

If the call has to be rescued by the error handler of the scheduler, call(true) might help:

require 'rufus-scheduler'

s = Rufus::Scheduler.new

def s.on_error(job, err)
  if job
    p [ 'error in scheduled job', job.class, job.original, err.message ]
  else
    p [ 'error while scheduling', err.message ]
  end
rescue
  p $!
end

job =
  s.schedule_in('1d') do
    fail 'again'
  end

job.call(true)
  # true lets the error_handler deal with errors in the job call

AtJob and InJob methods

time

Returns when the job will trigger (hopefully).

next_time

An alias for time.

EveryJob, IntervalJob and CronJob methods

next_time

Returns the next time the job will trigger (hopefully).

count

Returns how many times the job fired.

EveryJob methods

frequency

It returns the scheduling frequency. For a job scheduled "every 20s", it's 20.

It's used to determine if the job frequency is higher than the scheduler frequency (it raises an ArgumentError if that is the case).

IntervalJob methods

interval

Returns the interval scheduled between each execution of the job.

Every jobs use a time duration between each start of their execution, while interval jobs use a time duration between the end of an execution and the start of the next.
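The distinction reduces to which reference point the duration is added to. A sketch (both method names are invented for the example):

```ruby
# EveryJob-style: next start anchored on the previous start
def next_every_start(previous_start, duration)
  previous_start + duration
end

# IntervalJob-style: next start anchored on the previous end
def next_interval_start(previous_end, interval)
  previous_end + interval
end
```

For a run starting at t=0 that works for 2 seconds on a 10 second schedule, the every-style next start is 10 while the interval-style next start is 12.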

CronJob methods

brute_frequency

An expensive method to run, it's brute. It caches its results. By default it runs for 2017 (a non leap-year).

  require 'rufus-scheduler'

  Rufus::Scheduler.parse('* * * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf4520c5e8
    #      @span=31536000.0, @delta_min=60, @delta_max=60,
    #      @occurrences=525600, @span_years=1.0, @yearly_occurrences=525600.0>
      # Occurs 525600 times in a span of 1 year (2017) and 1 day.
      # There are at least 60 seconds between "triggers" and at most 60 seconds.

  Rufus::Scheduler.parse('0 12 * * *').brute_frequency
    # => #<Fugit::Cron::Frequency:0x00007fdf451ec6d0
    #      @span=31536000.0, @delta_min=86400, @delta_max=86400,
    #      @occurrences=365, @span_years=1.0, @yearly_occurrences=365.0>
  Rufus::Scheduler.parse('0 12 * * *').brute_frequency.to_debug_s
    # => "dmin: 1D, dmax: 1D, ocs: 365, spn: 52W1D, spnys: 1, yocs: 365"
      # 365 occurrences, at most 1 day between each, at least 1 day.

The CronJob#frequency method found in rufus-scheduler < 3.5 has been retired.

looking up jobs

Scheduler#job(job_id)

The scheduler #job(job_id) method can be used to look up Job instances.

  require 'rufus-scheduler'

  scheduler = Rufus::Scheduler.new

  job_id =
    scheduler.in '10d' do
      # ...
    end

  # later on...

  job = scheduler.job(job_id)

Scheduler #jobs #at_jobs #in_jobs #every_jobs #interval_jobs and #cron_jobs

Are methods for looking up lists of scheduled Job instances.

Here is an example:

  # let's unschedule all the at jobs

  scheduler.at_jobs.each(&:unschedule)
Scheduler#jobs(tag: / tags: x)

When scheduling a job, one can specify one or more tags attached to the job. These can be used to look up the job later on.

  scheduler.in '10d', tag: 'main_process' do
    # ...
  end

  scheduler.in '10d', tags: [ 'main_process', 'side_dish' ] do
    # ...
  end

  # ...

  jobs = scheduler.jobs(tag: 'main_process')
    # find all the jobs with the 'main_process' tag

  jobs = scheduler.jobs(tags: [ 'main_process', 'side_dish' ])
    # find all the jobs with the 'main_process' AND 'side_dish' tags

Scheduler#running_jobs

Returns the list of Job instances that have currently running instances.

Whereas the other "_jobs" methods scan the scheduled job list, this method scans the thread list to find the jobs. It thus includes jobs that are running but are no longer scheduled (that happens for at and in jobs).

misc Scheduler methods

Scheduler#unschedule(job_or_job_id)

Unschedules a job given directly or by its id.

Scheduler#shutdown

Shuts down the scheduler, ceases any scheduler/triggering activity.

Scheduler#shutdown(wait: true)

Shuts down the scheduler, waits (blocks) until all the jobs cease running.

Scheduler#shutdown(wait: n)

Shuts down the scheduler, waits (blocks) at most n seconds until all the jobs cease running. (Jobs are killed after n seconds have elapsed).

Scheduler#shutdown(:kill)

Kills all the job threads and then shuts the scheduler down. Radical.

Scheduler#down?

Returns true if the scheduler has been shut down.

Scheduler#started_at

Returns the Time instance at which the scheduler got started.

Scheduler #uptime / #uptime_s

Returns the number of seconds for which the scheduler has been running.

#uptime_s returns this count in a String easier to grasp for humans, like "3d12m45s123".

Scheduler#join

Lets the current thread join the scheduling thread in rufus-scheduler. The thread comes back when the scheduler gets shut down.

#join is mostly used in standalone scheduling scripts (or tiny one-file examples). Calling #join from a web application initializer will probably hijack the main thread and prevent the web application from being served. Do not put a #join in such a web application initializer file.

Scheduler#threads

Returns all the threads associated with the scheduler, including the scheduler thread itself.

Scheduler#work_threads(query=:all/:active/:vacant)

Lists the work threads associated with the scheduler. The query option defaults to :all.

  • :all : all the work threads
  • :active : all the work threads currently running a Job
  • :vacant : all the work threads currently not running a Job

Note that the main schedule thread will be returned if it is currently running a Job (ie one of those blocking: true jobs).

Scheduler#scheduled?(job_or_job_id)

Returns true if the arg is a currently scheduled job (see Job#scheduled?).

Scheduler#occurrences(time0, time1)

Returns a hash { job => [ t0, t1, ... ] } mapping jobs to their potential trigger time within the [ time0, time1 ] span.

Please note that, for interval jobs, the #mean_work_time is used, so the result is only a prediction.
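For a fixed-period ("every") job, computing the potential trigger times inside a [ time0, time1 ] span is a simple walk forward. A sketch of that one case (the function name is invented; cron and interval jobs need more machinery):

```ruby
def occurrences_between(first_at, period, t0, t1)
  ts = []
  t = first_at
  t += period while t < t0   # skip ahead to the start of the span
  while t <= t1              # collect every trigger inside the span
    ts << t
    t += period
  end
  ts
end
```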

Scheduler#timeline(time0, time1)

Like #occurrences but returns a list [ [ t0, job0 ], [ t1, job1 ], ... ] of time + job pairs.

dealing with job errors

The easy, job-granular way of dealing with errors is to rescue and deal with them immediately. The two next sections show examples. Skip them for explanations on how to deal with errors at the scheduler level.

block jobs

As said, jobs could take care of their errors themselves.

scheduler.every '10m' do
  begin
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace.join("\n")
    $stderr.puts '-' * 80
  end
end

callable jobs

Jobs are not limited to blocks; here is how the above would look with a dedicated class.

class MyHandler
  def call(job)
    # do something that might fail...
  rescue => e
    $stderr.puts '-' * 80
    $stderr.puts e.message
    $stderr.puts e.backtrace.join("\n")
    $stderr.puts '-' * 80
  end
end

scheduler.every '10m', MyHandler.new

TODO: talk about callable#on_error (if implemented)

(see scheduling handler instances and scheduling handler classes for more about those "callable jobs")

Rufus::Scheduler#stderr=

By default, rufus-scheduler intercepts all errors (that inherit from StandardError) and dumps abundant details to $stderr.

If, for example, you'd like to divert that flow to another file (descriptor), you can reassign $stderr for the current Ruby process:

$stderr = File.open('/var/log/myapplication.log', 'ab')

or you can limit that reassignment to the scheduler itself:

scheduler.stderr = File.open('/var/log/myapplication.log', 'ab')

Rufus::Scheduler#on_error(job, error)

We've just seen that, by default, rufus-scheduler dumps error information to $stderr. If one needs to completely change what happens in case of error, it's OK to overwrite #on_error

def scheduler.on_error(job, error)

  Logger.warn("intercepted error in #{job.id}: #{error.message}")
end

On Rails, the on_error method redefinition might look like:

def scheduler.on_error(job, error)

  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end

Rufus::Scheduler #on_pre_trigger and #on_post_trigger callbacks

One can bind callbacks before and after jobs trigger:

s = Rufus::Scheduler.new

def s.on_pre_trigger(job, trigger_time)
  puts "triggering job #{job.id}..."
end

def s.on_post_trigger(job, trigger_time)
  puts "triggered job #{job.id}."
end

s.every '1s' do
  # ...
end

The trigger_time is the time at which the job triggers. It might be a bit before Time.now.

Warning: these two callbacks are executed in the scheduler thread, not in the work threads (the threads where the job execution really happens).

Rufus::Scheduler#around_trigger

One can create an around callback which will wrap a job:

def s.around_trigger(job)
  t = Time.now
  puts "Starting job #{job.id}..."
  yield
  puts "job #{job.id} finished in #{Time.now - t} seconds."
end

The around callback is executed in the work thread.

Rufus::Scheduler#on_pre_trigger as a guard

Returning false in on_pre_trigger will prevent the job from triggering. Returning anything else (nil, -1, true, ...) will let the job trigger.

Note: your business logic should go in the scheduled block itself (or the scheduled instance). Don't put business logic in on_pre_trigger. Return false for admin reasons (backend down, etc), not for business reasons that are tied to the job itself.

def s.on_pre_trigger(job, trigger_time)

  return false if Backend.down?

  puts "triggering job #{}..."
end

Rufus::Scheduler.new options

frequency: numeric or time string

By default, rufus-scheduler sleeps 0.300 second between every step. At each step it checks for jobs to trigger and so on.

The :frequency option lets you change that 0.300 second to something else.

scheduler = Rufus::Scheduler.new(frequency: 5)

It's OK to use a time string to specify the frequency.

scheduler = Rufus::Scheduler.new(frequency: '2h10m')
  # this scheduler will sleep 2 hours and 10 minutes between every "step"

Use with care.

lockfile: "mylockfile.txt"

This feature only works on OSes that support the flock (man 2 flock) call.

Starting the scheduler with lockfile: '.rufus-scheduler.lock' will make the scheduler attempt to create and lock the file .rufus-scheduler.lock in the current working directory. If that fails, the scheduler will not start.

The idea is to guarantee only one scheduler (in a group of schedulers sharing the same lockfile) is running.

This is useful in environments where the Ruby process holding the scheduler gets started multiple times.

If the lockfile mechanism here is not sufficient, you can plug your custom mechanism. It's explained in advanced lock schemes below.

scheduler_lock: lock object

(since rufus-scheduler 3.0.9)

The scheduler lock is an object that responds to #lock and #unlock. The scheduler calls #lock when starting up. If the answer is false, the scheduler stops its initialization work and won't schedule anything.

Here is a sample of a scheduler lock that only lets the scheduler running on a designated host start:

class HostLock
  def initialize(lock_name)
    @lock_name = lock_name
  end
  def lock
    @lock_name == `hostname -f`.strip
  end
  def unlock
    true
  end
end

scheduler = Rufus::Scheduler.new(scheduler_lock: HostLock.new('scheduler.example.com'))
  # the hostname is illustrative

By default, the scheduler_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that returns true.

trigger_lock: lock object

(since rufus-scheduler 3.0.9)

The trigger lock is an object that responds to #lock. The scheduler calls that method on the job lock right before triggering any job. If the answer is false, the trigger doesn't happen, the job is not done (at least not in this scheduler).

Here is a (stupid) PingLock example, it'll only trigger if an "other host" is not responding to ping. Do not use that in production, you don't want to fork a ping process for each trigger attempt...

class PingLock
  def initialize(other_host)
    @other_host = other_host
  end
  def lock
    ! system("ping -c 1 #{@other_host}")
  end
end

scheduler = Rufus::Scheduler.new(trigger_lock: PingLock.new('main.example.com'))
  # the hostname is illustrative

By default, the trigger_lock is an instance of Rufus::Scheduler::NullLock, with a #lock that always returns true.

As explained in advanced lock schemes, another way to tune that behaviour is by overriding the scheduler's #confirm_lock method. (You could also do that with an #on_pre_trigger callback).

max_work_threads: integer

In rufus-scheduler 2.x, by default, each job triggering received its own, brand new, thread of execution. In rufus-scheduler 3.x, execution happens in a pooled work thread. The max work thread count (the pool size) defaults to 28.

One can set this maximum value when starting the scheduler.

scheduler = Rufus::Scheduler.new(max_work_threads: 77)

It's OK to increase the :max_work_threads of a running scheduler.

scheduler.max_work_threads += 10

Rufus::Scheduler.singleton

Do not want to store a reference to your rufus-scheduler instance? Then Rufus::Scheduler.singleton can help: it returns a singleton instance of the scheduler, initialized the first time this class method is called.

Rufus::Scheduler.singleton.every('10s') { puts "hello, world!" }

It's OK to pass initialization arguments (like :frequency or :max_work_threads) but they will only be taken into account the first time .singleton is called.

Rufus::Scheduler.singleton(max_work_threads: 77)
Rufus::Scheduler.singleton(max_work_threads: 277) # no effect

The .s is a shortcut for .singleton.

Rufus::Scheduler.s.every('10s') { puts "hello, world!" }
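The first-call-wins behaviour of .singleton is plain memoization. A sketch (the TinyScheduler class is an invented stand-in, not the gem's code):

```ruby
class TinyScheduler
  attr_reader :opts

  def initialize(opts)
    @opts = opts
  end

  # builds the instance on the first call, ignores args afterwards
  def self.singleton(opts = {})
    @singleton ||= new(opts)
  end
end
```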

advanced lock schemes

As seen above, rufus-scheduler proposes the :lockfile system out of the box. If in a group of schedulers only one is supposed to run, the lockfile mechanism prevents schedulers that have not set/created the lockfile from running.

There are situations where this is not sufficient.

By overriding #lock and #unlock, one can customize how schedulers lock.

This example was provided by Eric Lindvall:

class ZookeptScheduler < Rufus::Scheduler

  def initialize(zookeeper, opts={})
    @zk = zookeeper
    super(opts)
  end

  def lock
    @zk_locker = @zk.exclusive_locker('scheduler')
    @zk_locker.lock # returns true if the lock was acquired, false else
  end

  def unlock
    @zk_locker.unlock
  end

  def confirm_lock
    return false if down?
    @zk_locker.assert!
    true
  rescue ZK::Exceptions::LockAssertionFailedError => e
    # we've lost the lock, shutdown (and return false to at least prevent
    # this job from triggering)
    shutdown
    false
  end
end
This uses a zookeeper to make sure only one scheduler in a group of distributed schedulers runs.

The methods #lock and #unlock are overridden and #confirm_lock is provided, to make sure that the lock is still valid.

The #confirm_lock method is called right before a job triggers (if it is provided). The more generic callback #on_pre_trigger is called right after #confirm_lock.

:scheduler_lock and :trigger_lock

(introduced in rufus-scheduler 3.0.9).

Another way of providing #lock, #unlock and #confirm_lock to a rufus-scheduler is by using the :scheduler_lock and :trigger_lock options.

See :trigger_lock and :scheduler_lock.

The scheduler lock may be used to prevent a scheduler from starting, while a trigger lock prevents individual jobs from triggering (the scheduler goes on scheduling).

One has to be careful with what goes in #confirm_lock or in a trigger lock, as it gets called before each trigger.

Warning: you may think you're heading towards "high availability" by using a trigger lock and having lots of schedulers at hand. It may be so if you limit yourself to scheduling the same set of jobs at scheduler startup. But if you add schedules at runtime, they stay local to their scheduler. There is no magic that propagates the jobs to all the schedulers in your pack.

parsing cronlines and time strings

(Please note that fugit does the heavy-lifting parsing work for rufus-scheduler).

Rufus::Scheduler provides a class method .parse to parse time durations and cron strings. It's what it's using when receiving schedules. One can use it directly (no need to instantiate a Scheduler).

require 'rufus-scheduler'

Rufus::Scheduler.parse('1w2d')
  # => 777600.0
Rufus::Scheduler.parse('1.0w1.0d')
  # => 777600.0

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012').strftime('%c')
  # => 'Sun Nov 18 16:01:00 2012'

Rufus::Scheduler.parse('Sun Nov 18 16:01:00 2012 Europe/Berlin').strftime('%c %z')
  # => 'Sun Nov 18 15:01:00 2012 +0000'

Rufus::Scheduler.parse(0.1)
  # => 0.1

Rufus::Scheduler.parse('* * * * *')
  # => #<Fugit::Cron:0x00007fb7a3045508
  #      @original="* * * * *", @cron_s=nil,
  #      @seconds=[0], @minutes=nil, @hours=nil, @monthdays=nil, @months=nil,
  #      @weekdays=nil, @zone=nil, @timezone=nil>

It returns a number when the input is a duration and a Fugit::Cron instance when the input is a cron string.

It will raise an ArgumentError if it can't parse the input.

Beyond .parse, there are also .parse_cron and .parse_duration, for finer granularity.

There is an interesting helper method named .to_duration_hash:

require 'rufus-scheduler'

Rufus::Scheduler.to_duration_hash(60)
  # => { :m => 1 }
Rufus::Scheduler.to_duration_hash(62.127)
  # => { :m => 1, :s => 2, :ms => 127 }

Rufus::Scheduler.to_duration_hash(62.127, drop_seconds: true)
  # => { :m => 1 }
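The decomposition behind .to_duration_hash can be sketched in a few lines (a simplification: the real method knows more units and options like drop_seconds, and the function name here is invented):

```ruby
def duration_hash(seconds)
  h = {}
  # split off the millisecond fraction first
  ms = ((seconds - seconds.to_i) * 1000).round
  rest = seconds.to_i
  # peel off whole days, hours, minutes, seconds
  { d: 86400, h: 3600, m: 60, s: 1 }.each do |unit, size|
    n, rest = rest.divmod(size)
    h[unit] = n if n > 0
  end
  h[:ms] = ms if ms > 0
  h
end
```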

cronline notations specific to rufus-scheduler

first Monday, last Sunday et al

To schedule something at noon every first Monday of the month:

scheduler.cron('00 12 * * mon#1') do
  # ...
end

To schedule something at noon the last Sunday of every month:

scheduler.cron('00 12 * * sun#-1') do
  # ...
end
# OR
scheduler.cron('00 12 * * sun#L') do
  # ...
end

Such cronlines can be tested with scripts like:

require 'rufus-scheduler'

Time.now.to_s
  # => 2013-10-26 07:07:08 +0900
Rufus::Scheduler.parse('* * * * mon#1').next_time.to_s
  # => 2013-11-04 00:00:00 +0900

L (last day of month)

L can be used in the "day" slot:

In this example, the cronline is supposed to trigger every last day of the month at noon:

require 'rufus-scheduler'

Time.now.to_s
  # => 2013-10-26 07:22:09 +0900
Rufus::Scheduler.parse('00 12 L * *').next_time.to_s
  # => 2013-10-31 12:00:00 +0900

negative day (x days before the end of the month)

It's OK to pass negative values in the "day" slot:

scheduler.cron '0 0 -5 * *' do
  # do it at 00h00 5 days before the end of the month...
end

Negative ranges (-10--5: 10 days before the end of the month to 5 days before the end of the month) are OK, but mixed positive / negative ranges will raise an ArgumentError.

Negative ranges with increments (-10--2/2) are accepted as well.

Descending day ranges are not accepted (10-8 or -8--10 for example).

a note about timezones

Cron schedules and at schedules support the specification of a timezone.

scheduler.cron '0 22 * * 1-5 America/Chicago' do
  # the job...
end

scheduler.at '2013-12-12 14:00 Pacific/Samoa' do
  puts "it's tea time!"
end

# or even

Rufus::Scheduler.parse("2013-12-12 14:00 Pacific/Saipan")
  # => #<Rufus::Scheduler::ZoTime:0x007fb424abf4e8 @seconds=1386820800.0, @zone=#<TZInfo::DataTimezone: Pacific/Saipan>, @time=nil>

I get "zotime.rb:41:in `initialize': cannot determine timezone from nil"

For when you see an error like:

  in `initialize':
    cannot determine timezone from nil (etz:nil,tnz:"中国标准时间",tzid:nil)
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `new'
    from rufus-scheduler/lib/rufus/scheduler/zotime.rb:198:in `now'
    from rufus-scheduler/lib/rufus/scheduler.rb:561:in `start'

It may happen on Windows or on systems that poorly hint to Ruby which timezone to use. It should be solved by explicitly setting ENV['TZ'] before the scheduler instantiation:

ENV['TZ'] = 'Asia/Shanghai'
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

On Rails you might want to try with:

ENV['TZ'] = Time.zone.name # Rails only
scheduler = Rufus::Scheduler.new
scheduler.every '2s' do
  puts "#{Time.now} Hello #{ENV['TZ']}!"
end

(Hat tip to Alexander in gh-230)

Rails sets its timezone under config/application.rb.

Rufus-Scheduler 3.3.3 detects the presence of Rails and uses its timezone setting (tested with Rails 4), so setting ENV['TZ'] should not be necessary.

The value can be determined by consulting the tz database (zoneinfo) list of zone identifiers.

Use a "continent/city" identifier (for example "Asia/Shanghai"). Do not use an abbreviation (not "CST") and do not use a local time zone name (not "中国标准时间" nor "Eastern Standard Time" which, for instance, points to a time zone in America and to another one in Australia...).

If the error persists (and especially on Windows), try to add the tzinfo-data to your Gemfile, as in:

gem 'tzinfo-data'

or by manually requiring it before requiring rufus-scheduler (if you don't use Bundler):

require 'tzinfo/data'
require 'rufus-scheduler'

so Rails?

Yes, I know, all of the above is boring and you're only looking for a snippet to paste in your Ruby-on-Rails application to schedule...

Here is an example initializer:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

# Let's use the rufus-scheduler singleton
s = Rufus::Scheduler.singleton

# Stupid recurrent task...
s.every '1m' do
  Rails.logger.info "hello, it's #{Time.now}"
end

And now you tell me that this is good, but you want to schedule stuff from your controller.


class ScheController < ApplicationController

  # GET /sche/
  def index

    job_id =
      Rufus::Scheduler.singleton.in '5s' do
        Rails.logger.info "time flies, it's now #{Time.now}"
      end

    render text: "scheduled job #{job_id}"
  end
end

The rufus-scheduler singleton is instantiated in the config/initializers/scheduler.rb file, it's then available throughout the webapp via Rufus::Scheduler.singleton.

Warning: this works well with single-process Ruby servers like Webrick and Thin. Using rufus-scheduler with Passenger or Unicorn requires a bit more knowledge and tuning, gently provided by a bit of googling and reading; see the FAQ above.

avoid scheduling when running the Ruby on Rails console

(Written in reply to gh-186)

If you don't want rufus-scheduler to trigger anything while running the Ruby on Rails console, running for tests/specs, or running from a Rake task, you can insert a conditional return statement before jobs are added to the scheduler instance:

# config/initializers/scheduler.rb

require 'rufus-scheduler'

return if defined?(Rails::Console) || Rails.env.test? || File.split($PROGRAM_NAME).last == 'rake'
  # do not schedule when Rails is run from its console, for a test/spec, or
  # from a Rake task

# return if $PROGRAM_NAME.include?('spring')
  # see

s = Rufus::Scheduler.singleton

s.every '1m' do
  Rails.logger.info "hello, it's #{Time.now}"
end

(Beware later version of Rails where Spring takes care pre-running the initializers. Running spring stop or disabling Spring might be necessary in some cases to see changes to initializers being taken into account.)

rails server -d

(Written in reply to a support request.)

There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler as seen above is started in the main process that then gets forked and daemonized. The rufus-scheduler thread (and any other thread) gets lost, no scheduling happens.

I avoid running -d in development mode and bother about daemonizing only for production deployment.

There are well crafted articles on process daemonization out there; please read up on the topic before daemonizing.

If, anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, had to add gem 'unicorn' to Gemfile first).

executor / reloader

You might benefit from wrapping your scheduled code in the Rails executor or reloader. Read more in the Rails guide "Threading and Code Execution in Rails".


support

See getting help above.

Author: jmettraux
Source code:
License: MIT license




Laravel-failed-job-monitor: Get Notified When A Queued Job Fails

Get notified when a queued job fails

This package sends notifications if a queued job fails. Out of the box it can send a notification via mail and/or Slack. It leverages Laravel's native notification system.


Installation

For Laravel versions 5.8 and 6.x, use v3.x of this package.

You can install the package via composer:

composer require spatie/laravel-failed-job-monitor

If you intend to use Slack notifications you should also install the guzzle client:

composer require guzzlehttp/guzzle

The service provider will automatically be registered.

Next, you must publish the config file:

php artisan vendor:publish --tag=failed-job-monitor-config

This is the contents of the default configuration file. Here you can specify the notifiable to which the notifications should be sent. The default notifiable will use the variables specified in this config file.

return [

    /*
     * The notification that will be sent when a job fails.
     */
    'notification' => \Spatie\FailedJobMonitor\Notification::class,

    /*
     * The notifiable to which the notification will be sent. The default
     * notifiable will use the mail and slack configuration specified
     * in this config file.
     */
    'notifiable' => \Spatie\FailedJobMonitor\Notifiable::class,

    /*
     * By default notifications are sent for all failures. You can pass a callable to filter
     * out certain notifications. The given callable will receive the notification. If the callable
     * returns false, the notification will not be sent.
     */
    'notificationFilter' => null,

    /*
     * The channels to which the notification will be sent.
     */
    'channels' => ['mail', 'slack'],

    'mail' => [
        'to' => '',
    ],

    'slack' => [
        'webhook_url' => env('FAILED_JOB_SLACK_WEBHOOK_URL'),
    ],
];

Customizing the notification

The default notification class provided by this package has support for mail and Slack.

If you want to customize the notification you can specify your own notification class in the config file.

// config/failed-job-monitor.php
return [
    'notification' => \App\Notifications\CustomNotificationForFailedJobMonitor::class,
];

Customizing the notifiable

The default notifiable class provided by this package uses the channels, mail and slack keys from the config file to determine how notifications must be sent.

If you want to customize the notifiable you can specify your own notifiable class in the config file.

// config/failed-job-monitor.php
return [
    'notifiable' => \App\CustomNotifiableForFailedJobMonitor::class,
];

Filtering the notifications

To filter the notifications, pass a closure to the notificationFilter.

// config/failed-job-monitor.php
return [
    'notificationFilter' => function (Spatie\FailedJobMonitor\Notification $notification): bool {
        return true;
    },
];

The above works, except that Laravel doesn't support closure serialization, so you will get the following error when you run php artisan config:cache:

LogicException  : Your configuration files are not serializable.

It would thus be better to create a separate class and use one of its methods as the callback.


namespace App\Notifications;

use Spatie\FailedJobMonitor\Notification;

class FailedJobNotification
{
    public function notificationFilter(Notification $notification): bool
    {
        return true;
    }
}

And reference it in the configuration file.

// config/failed-job-monitor.php
return [
    'notificationFilter' => [App\Notifications\FailedJobNotification::class, 'notificationFilter'],
];


If you configured the package correctly, you're done. You'll receive a notification when a queued job fails.


Please see CHANGELOG for more information on what has changed recently.


composer test

Support us

We invest a lot of resources into creating best-in-class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.


Please see CONTRIBUTING for details.


If you've found a bug regarding security, please mail us instead of using the issue tracker.


A big thank you to Egor Talantsev for his help creating v2 of the package.

Author: Spatie
Source Code: 
License: MIT license

#laravel #job #php #notifications 

Royce Reinger


Que: A Ruby Job Queue That Uses PostgreSQL's Advisory Locks for Speed


This README and the rest of the docs on the master branch all refer to Que 2.x. For older versions, please refer to the docs on the respective branches: 1.x, or 0.x.

TL;DR: Que is a high-performance job queue that improves the reliability of your application by protecting your jobs with the same ACID guarantees as the rest of your data.

Que ("keɪ", or "kay") is a queue for Ruby and PostgreSQL that manages jobs using advisory locks, which gives it several advantages over other RDBMS-backed queues:

  • Concurrency - Workers don't block each other when trying to lock jobs, as often occurs with "SELECT FOR UPDATE"-style locking. This allows for very high throughput with a large number of workers.
  • Efficiency - Locks are held in memory, so locking a job doesn't incur a disk write. These first two points are what limit performance with other queues. Under heavy load, Que's bottleneck is CPU, not I/O.
  • Safety - If a Ruby process dies, the jobs it's working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.

Additionally, there are the general benefits of storing jobs in Postgres, alongside the rest of your data, rather than in Redis or a dedicated queue:

  • Transactional Control - Queue a job along with other changes to your database, and it'll commit or rollback with everything else. If you're using ActiveRecord or Sequel, Que can piggyback on their connections, so setup is simple and jobs are protected by the transactions you're already using.
  • Atomic Backups - Your jobs and data can be backed up together and restored as a snapshot. If your jobs relate to your data (and they usually do), there's no risk of jobs falling through the cracks during a recovery.
  • Fewer Dependencies - If you're already using Postgres (and you probably should be), a separate queue is another moving part that can break.
  • Security - Postgres' support for SSL connections keeps your data safe in transport, for added protection when you're running workers on cloud platforms that you can't completely control.

Que's primary goal is reliability. You should be able to leave your application running indefinitely without worrying about jobs being lost due to a lack of transactional support, or left in limbo due to a crashing process. Que does everything it can to ensure that jobs you queue are performed exactly once (though the occasional repetition of a job can be impossible to avoid - see the docs on how to write a reliable job).

Que's secondary goal is performance. The worker process is multithreaded, so that a single process can run many jobs simultaneously.


  • MRI Ruby 2.7+
  • PostgreSQL 9.5+
  • Rails 6.0+ (optional)

Please note - Que's job table undergoes a lot of churn when it is under high load, and like any heavily-written table, is susceptible to bloat and slowness if Postgres isn't able to clean it up. The most common cause of this is long-running transactions, so it's recommended to try to keep all transactions against the database housing Que's job table as short as possible. This is good advice to remember for any high-activity database, but bears emphasizing when using tables that undergo a lot of writes.


Add this line to your application's Gemfile:

gem 'que'

And then execute:

bundle install

Or install it yourself as:

gem install que


First, create the queue schema in a migration. For example:

class CreateQueSchema < ActiveRecord::Migration[6.0]
  def up
    # Whenever you use Que in a migration, always specify the version you're
    # migrating to. If you're unsure what the current version is, check the
    # changelog.
    Que.migrate!(version: 6)
  end

  def down
    # Migrate to version 0 to remove Que entirely.
    Que.migrate!(version: 0)
  end
end

Create a class for each type of job you want to run:

# app/jobs/charge_credit_card.rb
class ChargeCreditCard < Que::Job
  # Default settings for this job. These are optional - without them, jobs
  # will default to priority 100 and run immediately.
  self.run_at = proc { 1.minute.from_now }

  # We use the Linux priority scale - a lower number is more important.
  self.priority = 10

  def run(credit_card_id, user_id:)
    # Do stuff.
    user = User.find(user_id)
    card = CreditCard.find(credit_card_id)

    User.transaction do
      # Write any changes you'd like to the database.
      user.update charged_at: Time.now

      # It's best to destroy the job in the same transaction as any other
      # changes you make. Que will mark the job as destroyed for you after the
      # run method if you don't do it yourself, but if your job writes to the DB
      # but doesn't destroy the job in the same transaction, it's possible that
      # the job could be repeated in the event of a crash.
      destroy

      # If you'd rather leave the job record in the database to maintain a job
      # history, simply replace the `destroy` call with a `finish` call.
    end
  end
end
Queue your job. Again, it's best to do this in a transaction with other changes you're making. Also note that any arguments you pass will be serialized to JSON and back again, so stick to simple types (strings, integers, floats, hashes, and arrays).

CreditCard.transaction do
  # Persist credit card information
  card = CreditCard.create(params[:credit_card])

  # Enqueue the job in the same transaction, passing simple ids as arguments.
  ChargeCreditCard.enqueue(card.id, user_id: current_user.id)
end
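The JSON round trip mentioned above is easy to see in plain Ruby, which is why only simple types should be passed as job arguments: symbol keys go in, string keys come out.

```ruby
require 'json'

# Arguments are serialized to JSON and back, so symbols become strings and
# anything that isn't a simple type is lost or mangled along the way.
original = [42, { user_id: 7, note: "hello" }]
restored = JSON.parse(JSON.generate(original))
# restored == [42, { "user_id" => 7, "note" => "hello" }]
```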

You can also add options to run the job after a specific time, or with a specific priority:

ChargeCreditCard.enqueue(card.id, user_id: current_user.id, job_options: { run_at: 1.day.from_now, priority: 5 })

Learn more about job options.

Running the Que Worker

In order to process jobs, you must start a separate worker process outside of your main server.

bundle exec que

Try running que -h to get a list of runtime options:

$ que -h
usage: que [options] [file/to/require] ...
    -h, --help                       Show this help text.
    -i, --poll-interval [INTERVAL]   Set maximum interval between polls for available jobs, in seconds (default: 5)

You may need to pass que a file path to require so that it can load your app. Que will automatically load config/environment.rb if it exists, so you shouldn't need an argument if you're using Rails.

Additional Rails-specific Setup

If you're using ActiveRecord to dump your database's schema, please set your schema_format to :sql so that Que's table structure is managed correctly. This is a good idea regardless, as the :ruby schema format doesn't support many of PostgreSQL's advanced features.

Pre-1.0, the default queue name needed to be configured in order for Que to work out of the box with Rails. As of 1.0 the default queue name is now 'default', as Rails expects, but when Rails enqueues some types of jobs it may try to use another queue name that isn't worked by default. You can either:

Configure Rails to send all internal job types to the 'default' queue by adding the following to config/application.rb:

config.action_mailer.deliver_later_queue_name = :default
config.action_mailbox.queues.incineration = :default
config.action_mailbox.queues.routing = :default
config.active_storage.queues.analysis = :default
config.active_storage.queues.purge = :default

Alternatively, tell que to work all of these queues (less efficient, because it requires polling all of them):

que -q default -q mailers -q action_mailbox_incineration -q action_mailbox_routing -q active_storage_analysis -q active_storage_purge

Also, if you would like to integrate Que with Active Job, you can do it by setting the adapter in config/application.rb or in a specific environment by setting it in config/environments/production.rb, for example:

config.active_job.queue_adapter = :que

Que will automatically use the database configuration of your rails application, so there is no need to configure anything else.

You can then write your jobs as usual following the Active Job documentation. However, be aware that you'll lose the ability to finish the job in the same transaction as other database operations. That happens because Active Job is a generic background job framework that doesn't benefit from the database integration Que provides.

If you later decide to switch a job from Active Job to Que to have transactional integrity you can easily change the corresponding job class to inherit from Que::Job and follow the usage guidelines in the previous section.


There are a couple of ways to do testing. You may want to set Que::Job.run_synchronously = true, which will cause JobClass.enqueue to simply execute the job's logic synchronously, as if you'd run JobClass.run(*your_args). Or, you may want to leave it disabled so you can assert on the job state once jobs are stored in the database.


For full documentation, see here.

Related Projects

These projects are tested to be compatible with Que 1.x:

  • que-web is a Sinatra-based UI for inspecting your job queue.
  • que-scheduler lets you schedule tasks using a cron style config file.
  • que-locks lets you lock around job execution for so only one job runs at once for a set of arguments.

If you have a project that uses or relates to Que, feel free to submit a PR adding it to the list!

Community and Contributing

  • For feature suggestions or bugs in the library, please feel free to open an issue.
  • For general discussion and questions/concerns that don't relate to obvious bugs, join our Discord Server.
  • For contributions, pull requests submitted via Github are welcome.

Regarding contributions, one of the project's priorities is to keep Que as simple, lightweight and dependency-free as possible, and pull requests that change too much or wouldn't be useful to the majority of Que's users have a good chance of being rejected. If you're thinking of submitting a pull request that adds a new feature, consider starting a discussion in an issue first about what it would do and how it would be implemented. If it's a sufficiently large feature, or if most of Que's users wouldn't find it useful, it may be best implemented as a standalone gem, like some of the related projects above.


A note on running specs - Que's worker system is multithreaded and therefore prone to race conditions. As such, if you've touched that code, a single passing spec run isn't a guarantee that your changes haven't introduced bugs. One thing I like to do before pushing changes is rerun the specs many times, watching for hangs. You can do this from the command line with something like:

for i in {1..1000}; do SEED=$i bundle exec rake; done

This will iterate the specs one thousand times, each with a different ordering. If the specs hang, note what the seed number was on that iteration. For example, if the previous specs finished with a "Randomized with seed 328", you know that there's a hang with seed 329, and you can narrow it down to a specific spec with:

for i in {1..1000}; do LOG_SPEC=true SEED=329 bundle exec rake; done

Note that we iterate because there's no guarantee that the hang would reappear with a single additional run, so we need to rerun the specs until it reappears. The LOG_SPEC parameter will output the name and file location of each spec before it is run, so you can easily tell which spec is hanging, and you can continue narrowing things down from there.

Another helpful technique is to replace an it spec declaration with hit - this will run that particular spec 100 times during the run.

With Docker

We've provided a Dockerised environment to avoid the need to manually: install Ruby, install the gem bundle, set up Postgres, and connect to the database.

To run the specs using this environment, run:


To get a shell in the environment, run:


The Docker Compose config provides a convenient way to inject your local shell aliases into the Docker container. Simply create a file containing your alias definitions (or which sources them from other files) at ~/.docker-rc.d/.docker-bashrc, and they will be available inside the container.

Without Docker

You'll need to have Postgres running. Assuming you have it running on port 5697, with a que-test database, and a username & password of que, you can run:

DATABASE_URL=postgres://que:que@localhost:5697/que-test bundle exec rake

If you don't already have Postgres, you could use Docker Compose to run just the database:

docker compose up -d db

If you want to try a different version of Postgres, e.g. 12:


Git pre-push hook

So we can avoid breaking the build, we've created Git pre-push hooks to verify everything is ok before pushing.

To set up the pre-push hook locally, run:

echo -e "#\!/bin/bash\n\$(dirname \$0)/../../auto/pre-push-hook" > .git/hooks/pre-push
chmod +x .git/hooks/pre-push

Release process

The process for releasing a new version of the gem is:

  • Merge PR(s)
  • Git pull locally
  • Update the version number, bundle install, and commit
  • Update, and commit
  • Tag the commit with the version number, prefixed by v
  • Git push to master
  • Git push the tag
  • Publish the new version of the gem to RubyGems: gem build -o que.gem && gem push que.gem
  • Create a GitHub release - rather than describe anything there, link to the heading for the release in
  • Post on the Que Discord in #announcements

Author: que-rb
Source Code: 
License: MIT license

#ruby #job #queue #postgresql 

Royce Reinger


Backburner: Simple and Reliable Beanstalkd Job Queue for Ruby


Backburner is a beanstalkd-powered job queue that can handle a very high volume of jobs. You create background jobs and place them on multiple work queues to be processed later.

Processing background jobs reliably has never been easier than with beanstalkd and Backburner. This gem works with any ruby-based web framework, but is especially suited for use with Sinatra, Padrino and Rails.

If you want to use beanstalk for your job processing, consider using Backburner. Backburner is heavily inspired by Resque and DelayedJob. Backburner stores all jobs as simple JSON message payloads. Persistent queues are supported when beanstalkd persistence mode is enabled.

Backburner supports multiple queues, job priorities, delays, and timeouts. In addition, Backburner has robust support for retrying failed jobs, handling error cases, custom logging, and extensible plugin hooks.

Why Backburner?

Backburner is well tested and has a familiar, no-nonsense approach to job processing, but that is of secondary importance. Let's face it, there are a lot of options for background job processing; DelayedJob and Resque are the first that come to mind. So, how do we make sense of which one to use? And why use Backburner over the alternatives?

The key to understanding the differences lies in understanding the different projects and protocols that power these popular queue libraries under the hood. Every job queue requires a queue store that jobs are put into and pulled out of. In the case of Resque, jobs are processed through Redis, a persistent key-value store. In the case of DelayedJob, jobs are processed through ActiveRecord and a database such as PostgreSQL.

The work queue underlying these gems tells you infinitely more about the differences than anything else. Beanstalk is probably the best solution for job queues available today for many reasons. The real question then is... "Why Beanstalk?".

Why Beanstalk?

Illya has an excellent blog post Scalable Work Queues with Beanstalk and Adam Wiggins posted an excellent comparison.

You will quickly see that beanstalkd is an underrated but incredible project that is extremely well-suited as a job queue - significantly better suited for this task than Redis or a database. Beanstalk is a simple and very fast work queue service rolled into a single binary - it is the memcached of work queues. Originally built to power the backend for the 'Causes' Facebook app, it is a mature and production-ready open source project. PostRank uses beanstalk to reliably process millions of jobs a day.

A single instance of Beanstalk is perfectly capable of handling thousands of jobs a second (or more, depending on your job size) because it is an in-memory, event-driven system. Powered by libevent under the hood, it requires zero setup (launch and forget, à la memcached), optional log based persistence, an easily parsed ASCII protocol, and a rich set of tools for job management that go well beyond a simple FIFO work queue.

Beanstalkd supports the following features out of the box:

  • Parallelized - Supports multiple work queues created on demand.
  • Reliable - Beanstalk's reserve, work, delete cycle ensures reliable processing.
  • Scheduling - Delay enqueuing jobs by a specified interval to schedule processing later.
  • Fast - Processes thousands of jobs per second without breaking a sweat.
  • Priorities - Specify priority so important jobs can be processed quickly.
  • Persistence - Jobs are stored in memory for speed, but logged to disk for safe keeping.
  • Federation - Horizontal scalability provided through federation by the client.
  • Error Handling - Bury any job which causes an error for later debugging and inspection.

Keep in mind that these features are supported out of the box with beanstalk and require no special code within this gem to support. In the end, beanstalk is the ideal job queue while also being ridiculously easy to install and setup.


First, you probably want to install beanstalkd, which powers the job queues. Depending on your platform, this should be as simple as (for Ubuntu):

$ sudo apt-get install beanstalkd

Add this line to your application's Gemfile:

gem 'backburner'

And then execute:

$ bundle

Or install it yourself as:

$ gem install backburner


Backburner is extremely simple to setup. Just configure basic settings for backburner:

Backburner.configure do |config|
  config.beanstalk_url       = "beanstalk://"
  config.tube_namespace      = ""
  config.namespace_separator = "."
  config.on_error            = lambda { |e| puts e }
  config.max_job_retries     = 3 # default 0 retries
  config.retry_delay         = 2 # default 5 seconds
  config.retry_delay_proc    = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }
  config.default_priority    = 65536
  config.respond_timeout     = 120
  config.default_worker      = Backburner::Workers::Simple
  config.logger              = Logger.new(STDOUT)
  config.primary_queue       = "backburner-jobs"
  config.priority_labels     = { :custom => 50, :useless => 1000 }
  config.reserve_timeout     = nil
  config.job_serializer_proc = lambda { |body| JSON.dump(body) }
  config.job_parser_proc     = lambda { |body| JSON.parse(body) }
end


The key options available are:

  • beanstalk_url - Address for beanstalkd connection, i.e. 'beanstalk://'.
  • tube_namespace - Prefix used for all tubes related to this backburner queue.
  • namespace_separator - Separator used for namespace and queue name.
  • on_error - Lambda invoked with the error whenever any job in the system fails.
  • max_job_retries - Integer defining how many times to retry a job before burying it.
  • retry_delay - Integer defining the base time to wait (in seconds) between job retries.
  • retry_delay_proc - Lambda that calculates the delay used, allowing for exponential back-off.
  • default_priority - Integer default priority of jobs.
  • respond_timeout - Integer defining how long a job has to complete its task.
  • default_worker - Worker class that will be used if no other worker is specified.
  • logger - Logger recorded to when backburner wants to report info or errors.
  • primary_queue - Primary queue used for a job when an alternate queue is not given.
  • priority_labels - Hash of named priority definitions for your app.
  • reserve_timeout - Duration to wait for work from a single server, or nil for forever.
  • job_serializer_proc - Lambda that serializes a job body to a string to write to the task.
  • job_parser_proc - Lambda that parses a task body string to a hash.
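As a quick check of how retry_delay_proc enables exponential back-off, the cubic proc from the configuration above produces steadily growing waits (a worked example, not part of the library):

```ruby
# With retry_delay = 2, the proc waits 2 + n**3 seconds before retry n.
retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }

delays = (1..4).map { |n| retry_delay_proc.call(2, n) }
# delays == [3, 10, 29, 66]
```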

Breaking Changes

Before v0.4.0: Jobs were placed into default queues based on the name of the class creating the queue, i.e. NewsletterJob would be put into a 'newsletter-job' queue. As of 0.4.0, all jobs are placed into a primary queue named "" unless otherwise specified.


Backburner allows you to create jobs and place them onto any number of beanstalk tubes, and later pull those jobs off the tubes and process them asynchronously with a worker.

Enqueuing Jobs

At the core, Backburner is about jobs that can be processed asynchronously. Jobs are simple ruby objects which respond to perform.

Job objects are queued as JSON onto a tube to be later processed by a worker. Here's an example:

class NewsletterJob
  # required
  def self.perform(email, body)
    NewsletterMailer.deliver_text_to_email(email, body)
  end

  # optional, defaults to 'backburner-jobs' tube
  def self.queue
    "newsletter-sender"
  end

  # optional, defaults to default_priority
  def self.queue_priority
    1000 # most urgent priority is 0
  end

  # optional, defaults to respond_timeout in config
  def self.queue_respond_timeout
    300 # number of seconds before job times out, 0 to avoid timeout. NB: A timeout of 1 second will likely lead to race conditions between Backburner and beanstalkd and should be avoided
  end

  # optional, defaults to retry_delay_proc in config
  def self.queue_retry_delay_proc
    lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 5) }
  end

  # optional, defaults to retry_delay in config
  def self.queue_retry_delay
  end

  # optional, defaults to max_job_retries in config
  def self.queue_max_job_retries
  end
end
You can include the optional Backburner::Queue module so you can easily specify queue settings for this job:

class NewsletterJob
  include Backburner::Queue
  queue "newsletter-sender"  # defaults to 'backburner-jobs' tube
  queue_priority 1000 # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out, 0 to avoid timeout

  def self.perform(email, body)
    NewsletterMailer.deliver_text_to_email(email, body)
  end
end

Jobs can be enqueued with:

Backburner.enqueue NewsletterJob, '', 'lorem ipsum...'

Backburner.enqueue accepts first a ruby object that supports perform and then a series of parameters to that object's perform method. The queue name used by default is {namespace}.backburner-jobs unless otherwise specified.

You may also pass a lambda as the queue name and it will be evaluated when enqueuing a job (and passed the Job's class as an argument). This is especially useful when combined with "Simple Async Jobs" (see below).
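A queue option can therefore be either a plain name or a lambda. A minimal sketch of that resolution logic (resolve_queue is a hypothetical helper, not Backburner's API) looks like:

```ruby
# Use the value as-is unless it responds to `call`, in which case invoke it
# with the job class, mirroring the lambda-queue behavior described above.
def resolve_queue(queue, job_class)
  queue.respond_to?(:call) ? queue.call(job_class) : queue
end

resolve_queue('newsletter-sender', nil)                             # => "newsletter-sender"
resolve_queue(->(klass) { "#{klass.name.downcase}-jobs" }, String)  # => "string-jobs"
```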

Simple Async Jobs

In addition to defining custom jobs, a job can also be enqueued by invoking the async method on any object which includes Backburner::Performable. Async enqueuing works for both instance and class methods on any performable object.

class User
  include Backburner::Performable
  queue "user-jobs"  # defaults to 'user'
  queue_priority 500 # most urgent priority is 0
  queue_respond_timeout 300 # number of seconds before job times out, 0 to avoid timeout

  def activate(device_id)
    @device = Device.find(device_id)
    # ...
  end

  def self.reset_password(user_id)
    # ...
  end
end

# Async works for instance methods on a persisted object with an `id`
@user = User.first
@user.async(:ttr => 100, :queue => "activate").activate(@device.id)
# ..and for class methods
User.async(:pri => 100, :delay => 10.seconds).reset_password(@user.id)

This automatically enqueues a job for that user record that will run activate with the specified argument. Note that you can set the queue name and queue priority at the class level and you are also able to pass pri, ttr, delay and queue directly as options into async.

The queue name used by default is {namespace}.backburner-jobs if not otherwise specified.

If a lambda is given for queue, then it will be called and given the performable object's class as an argument:

# Given the User class above
User.async(:queue => lambda { |user_klass| ["queue1","queue2"].sample(1).first }).do_hard_work # would add the job to either queue1 or queue2 randomly

Using Async Asynchronously

It's often useful to be able to configure your app in production such that every invocation of a method is asynchronous by default as seen in delayed_job. To accomplish this, the Backburner::Performable module exposes two handle_asynchronously convenience methods which accept the same options as the async method:

class User
  include Backburner::Performable

  def send_welcome_email
    # ...
  end

  # ---> For instance methods
  handle_asynchronously :send_welcome_email, queue: 'send-mail', pri: 5000, ttr: 60

  def self.update_recent_visitors
    # ...
  end

  # ---> For class methods
  handle_static_asynchronously :update_recent_visitors, queue: 'long-tasks', ttr: 300
end
Now, all calls to User.update_recent_visitors or User#send_welcome_email will automatically be handled asynchronously when invoked. Similarly, you can call these methods directly on the Backburner::Performable module to apply async behavior outside the class:

# Given the User class above
Backburner::Performable.handle_asynchronously(User, :activate, ttr: 100, queue: 'activate')

Now all calls to the activate method on a User instance will be async with the provided options.

A Note About Auto-Async

Because an async proxy is injected and used in place of the original method, you must not rely on the return value of the method. Using the example User class above, if send_welcome_email returned the status of an email submission and I relied on that to take some further action, I would be surprised after rewiring things with handle_asynchronously, because the async proxy actually returns the (boolean) result of Backburner::Worker.enqueue.
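The caveat can be illustrated with a stripped-down proxy (Mailer and AsyncProxy are hypothetical, assuming a proxy that returns the enqueue result rather than the method's own value):

```ruby
class Mailer
  def send_welcome_email
    "sent-message-id"  # the value callers used to rely on
  end
end

module AsyncProxy
  def self.handle_asynchronously(klass, method_name)
    # Keep the original around, then replace the method with a stub proxy.
    klass.send(:alias_method, :"#{method_name}_without_async", method_name)
    klass.send(:define_method, method_name) do |*args|
      # A real proxy would enqueue [method_name, args] here; the caller only
      # ever sees the enqueue result, stubbed as `true`.
      true
    end
  end
end

AsyncProxy.handle_asynchronously(Mailer, :send_welcome_email)

Mailer.new.send_welcome_email                # => true (the enqueue result)
Mailer.new.send_welcome_email_without_async  # => "sent-message-id"
```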

Working Jobs

Backburner workers are processes that run forever handling jobs that are reserved from the queue. Starting a worker in ruby code is simple:

Backburner.work

This will process jobs in all queues, but you can also restrict processing to specific queues:

Backburner.work('newsletter-sender', 'push-notifier')

The Backburner worker also exists as a rake task:

require 'backburner/tasks'

so you can run:

$ QUEUE=newsletter-sender,push-notifier rake backburner:work

You can also run the backburner binary for a convenient worker:

bundle exec backburner -q newsletter-sender,push-notifier -d -P /var/run/ -l /var/log/backburner.log

This will daemonize the worker and store the pid and logs automatically. For Rails and Padrino, the environment should load automatically. For other cases, use the -r flag to specify a file to require.

Delaying Jobs

In Backburner, jobs can be delayed by specifying the delay option whenever you enqueue a job. If you want to schedule a job for an hour from now, simply add that option while enqueuing the standard job:

Backburner::Worker.enqueue(NewsletterJob, ['', 'lorem ipsum...'], :delay => 1.hour)

or while you schedule an async method call:

User.async(:delay => 1.hour).reset_password(@user.id)

Backburner will take care of the rest!


Jobs are persisted to queues as JSON objects. Let's take our User example from above. We'll run the following code to create a job:

User.async.reset_password(@user.id)

The following JSON will be put on the {namespace}.backburner-jobs queue:

{
    'class': 'User',
    'args': [nil, 'reset_password', 123]
}

The first argument is the 'id' of the object in the case of an instance method being async'ed. For example:

@device = Device.find(987)
@user = User.find(246)
@user.async.activate(@device.id)

would be stored as:

{
    'class': 'User',
    'args': [246, 'activate', 987]
}

Since all jobs are persisted in JSON, your jobs must only accept arguments that can be encoded into that format. This is why our examples use object IDs instead of passing around objects.

Named Priorities

As of v0.4.0, Backburner has support for named priorities. beanstalkd priorities are numerical but backburner supports a mapping between a word and a numerical value. The following priorities are available by default: high is 0, medium is 100, and low is 200.

Priorities can be customized with:

Backburner.configure do |config|
  config.priority_labels = { :custom => 50, :useful => 5 }
  # or append to default priorities with
  # config.priority_labels = Backburner::Configuration::PRIORITY_LABELS.merge(:foo => 5)
end

and then these aliases can be used anywhere that a numerical value can:

Backburner::Worker.enqueue NewsletterJob, ["foo", "bar"], :pri => :custom
User.async(:pri => :useful, :delay => 10.seconds).reset_password(

Using named priorities can greatly simplify priority management.
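Conceptually, the named-priority lookup is just a label-to-number map with a numeric fall-through (a sketch, not Backburner's internals):

```ruby
# Default labels from the text above: high, medium, low.
PRIORITY_LABELS = { high: 0, medium: 100, low: 200 }.freeze

# Look the label up in the map; pass plain numeric priorities straight through.
def resolve_priority(pri)
  PRIORITY_LABELS.fetch(pri, pri)
end

resolve_priority(:high)  # => 0
resolve_priority(42)     # => 42
```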

Processing Strategies

In Backburner, there are several different strategies for processing jobs which are reflected by multiple worker subclasses. Custom workers can be defined fairly easily. By default, Backburner comes with the following workers built-in:

  • Backburner::Workers::Simple - Single threaded, no forking worker. Simplest option.
  • Backburner::Workers::Forking - Basic forking worker that manages crashes and memory bloat.
  • Backburner::Workers::ThreadsOnFork - Forking worker that utilizes threads for concurrent processing.
  • Backburner::Workers::Threading - Utilizes thread pools for concurrent processing.

You can select the default worker for processing with:

Backburner.configure do |config|
  config.default_worker = Backburner::Workers::Forking
end

or determine the worker on the fly when invoking work:

Backburner.work('newsletter-sender', :worker => Backburner::Workers::ThreadsOnFork)

or through associated rake tasks with:

$ QUEUE=newsletter-sender,push-message THREADS=2 GARBAGE=1000 rake backburner:threads_on_fork:work

When running on MRI or another Ruby implementation with a Global Interpreter Lock (GIL), do not be surprised if you're unable to saturate multiple cores, even with the threads_on_fork worker. To utilize multiple cores, you must run multiple worker processes.

Additional concurrency strategies will hopefully be contributed in the future. If you are interested in helping out, please let us know.

More info: Threads on Fork Worker

For more information on the threads_on_fork worker, check out the ThreadsOnFork Worker documentation. Please note that the ThreadsOnFork worker does not work on Windows due to its lack of fork.

More info: Threading Worker (thread-pool-based)

Configuration options for the Threading worker are similar to the threads_on_fork worker, sans the garbage option. When running via the backburner CLI, it's simplest to provide the queue names and maximum number of threads in the format "{queue name}:{max threads in pool}[,{name}:{threads}]":

$ bundle exec backburner -q queue1:4,queue2:4  # and then other options, like environment, pidfile, app root, etc. See docs for the CLI
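Parsing that "{queue name}:{max threads}" format is plain string splitting; here is a hypothetical sketch of how such a spec breaks down (not Backburner's actual parser):

```ruby
# Hypothetical parser for a "{queue}:{max threads}" CLI spec.
spec  = 'queue1:4,queue2:2'
pools = spec.split(',').map do |part|
  name, threads = part.split(':')
  [name, threads.to_i]
end.to_h

pools['queue1']  # => 4
pools['queue2']  # => 2
```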

Default Queues

Workers can be easily restricted to processing only a specific set of queues as shown above. However, if you want a worker to process all queues instead, then you can leave the queue list blank.

When you execute a worker without any queues specified, the queues for every known job queue class that includes Backburner::Queue will be processed. To access the list of known queue classes, you can use:

Backburner::Worker.known_queue_classes
# => [NewsletterJob, SomeOtherJob]

Dynamic queues created by passing queue options will not be processed by a default worker. For this reason, you may want to take control over the default list of queues processed when none are specified. To do this, you can use the default_queues class method:

Backburner.default_queues.concat(["foo", "bar"])

This will ensure that the foo and bar queues are processed by any default workers. You can also add job queue names with:

Backburner.default_queues << NewsletterJob.queue

The default_queues stores the specific list of queues that should be processed by default by a worker.

Failures

When a job fails in backburner (usually because an exception was raised), the job will be released and retried again until the max_job_retries configuration is reached.

Backburner.configure do |config|
  config.max_job_retries  = 3 # retry jobs 3 times
  config.retry_delay      = 2 # wait 2 seconds in between retries
end

Note the default max_job_retries is 0, meaning that by default jobs are not retried.

As jobs are retried, a progressively-increasing delay is added to give time for transient problems to resolve themselves. This may be configured using retry_delay_proc. It expects an object that responds to #call and receives the value of retry_delay and the number of times the job has been retried already. The default is a cubic back-off, eg:

Backburner.configure do |config|
  config.retry_delay      = 2 # The minimum number of seconds a retry will be delayed
  config.retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }
end
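Plugging small numbers into that proc shows the curve it produces (plain Ruby, no Backburner required):

```ruby
retry_delay      = 2
retry_delay_proc = lambda { |min_retry_delay, num_retries| min_retry_delay + (num_retries ** 3) }

# Delay before each successive retry, given 0, 1, 2, ... prior retries.
delays = (0..4).map { |n| retry_delay_proc.call(retry_delay, n) }
delays  # => [2, 3, 10, 29, 66]
```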

If continued retry attempts fail, the job will be buried and can be 'kicked' later for inspection.

You can also setup a custom error handler for jobs using configure:

Backburner.configure do |config|
  config.on_error = lambda { |ex| Airbrake.notify(ex) }
end

Now all Backburner queue errors will appear in Airbrake for deeper inspection.

If you wish to retry a job without logging an error (for example when handling transient issues in a cloud or service oriented environment), simply raise a Backburner::Job::RetryJob error.

Logging

Logging in backburner is rather simple: when a job runs, when it fails, and when any exception occurs during processing, the log records it.

By default, the log will print to standard out. You can customize the log to output to any standard logger by controlling the configuration option:

Backburner.configure do |config|
  config.logger = Logger.new(STDOUT)
end

Be sure to check logs whenever things do not seem to be processing.

Hooks

Backburner is highly extensible and can be tailored to your needs by using various hooks that can be triggered across the job processing lifecycle. Using hooks is often much easier than trying to monkey-patch the internals.

Check out the hooks documentation for a detailed overview of using hooks.

Workers in Production

Once you have Backburner setup in your application, starting workers is really easy. Once beanstalkd is installed, your best bet is to use the built-in rake task that comes with Backburner. Simply add the task to your Rakefile:

# Rakefile
require 'backburner/tasks'

and then you can start the rake task with:

$ rake backburner:work
$ QUEUE=newsletter-sender,push-notifier rake backburner:work

The best way to deploy these rake tasks is using a monitoring library. We suggest God which watches processes and ensures their stability. A simple God recipe for Backburner can be found in examples/god.

Command-Line Interface

Instead of using the Rake tasks, you can use Backburner's command-line interface (CLI) – powered by the Dante gem – to launch daemonized workers. Several flags are available to control the process. Many of these are provided by Dante itself, such as flags for logging (-l), the process' PID (-P), whether to daemonize (-d) or kill a running process (-k). Backburner provides a few more:

Queues (-q)

Control which queues the worker will watch with the -q flag. Comma-separate multiple queue names and, if you're using the ThreadsOnFork worker, colon-separate the settings for thread limit, garbage limit and retries limit (eg. send_mail:4:10:3). See its wiki page for some more details.

backburner -q send_mail,create_thumbnail # You may need to use `bundle exec`

Boot an app (-r)

Load an app with the -r flag. Backburner supports automatic loading for both Rails and Padrino apps when started from their root folder. However, you may point to a specific app's root using this flag, which is very useful when running workers from a service script.

backburner -r "$path"

Load an environment (-e)

Use the -e flag to control which environment your app should use:

backburner -e $environment


In Backburner, if the beanstalkd connection is temporarily severed, several retries to establish the connection will be attempted. After several retries, if the connection is still not able to be made, a Beaneater::NotConnected exception will be raised. You can manually catch this exception, and attempt another manual retry using Backburner::Worker.retry_connection!.

Web Front-end

Be sure to check out the Sinatra-powered project beanstalkd_view by denniskuczynski which provides an excellent overview of the tubes and jobs processed by your beanstalk workers. An excellent addition to your Backburner setup.


Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request


The code in this project has been made in light of a few excellent projects:

Thanks to these projects for inspiration and certain design and implementation decisions.


Author: Nesquena
Source Code: 
License: MIT license


Backburner: Simple and Reliable Beanstalkd Job Queue for Ruby
Royce Reinger


Delayed_job: Database Backed Asynchronous Priority Queue


Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.

It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:

  • sending massive newsletters
  • image resizing
  • http downloads
  • updating smart collections
  • updating solr, our search server, after product changes
  • batch imports
  • spam checks


The library revolves around a delayed_jobs table which can be created by using:

  script/generate delayed_job


The created table looks as follows:

  create_table :delayed_jobs, :force => true do |table|
    table.integer  :priority, :default => 0      # Allows some jobs to jump to the front of the queue
    table.integer  :attempts, :default => 0      # Provides for retries, but still fail eventually.
    table.text     :handler                      # YAML-encoded string of the object that will do work
    table.string   :last_error                   # reason for last failure (See Note below)
    table.datetime :run_at                       # When to run. Could be for immediately, or sometime in the future.
    table.datetime :locked_at                    # Set when a client is working on this object
    table.datetime :failed_at                    # Set when all retries have failed (actually, by default, the record is deleted instead)
    table.string   :locked_by                    # Who is working on this object (if locked)
  end


On failure, the job is scheduled again in 5 seconds + N ** 4, where N is the number of retries.

The default MAX_ATTEMPTS is 25. After this, the job is either deleted (default) or left in the database with “failed_at” set.
With the default of 25 attempts, the last retry will be 20 days later, with the last interval being almost 100 hours.
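Those figures are easy to check in plain Ruby, assuming N counts the previous failed attempts (so the final, 25th attempt follows 24 failures):

```ruby
# After the N-th failure the job is rescheduled 5 + N**4 seconds later.
intervals = (1..24).map { |n| 5 + n ** 4 }

intervals.first(3)                 # => [6, 21, 86]
(intervals.last / 3600.0).round(1) # hours before the final attempt (~92.2)
(intervals.sum / 86_400.0).round(1) # days from first failure to last attempt (~20.4)
```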

The default MAX_RUN_TIME is 4.hours. If your job takes longer than that, another computer could pick it up. It’s up to you to
make sure your job doesn’t exceed this time. You should set this to the longest time you think the job could take.

By default, it will delete failed jobs (and it always deletes successful jobs). If you want to keep failed jobs, set
Delayed::Job.destroy_failed_jobs = false. The failed jobs will be marked with non-null failed_at.

Here is an example of changing job parameters in Rails:

  # config/initializers/delayed_job_config.rb
  Delayed::Job.destroy_failed_jobs = false
  silence_warnings do
    Delayed::Job.const_set("MAX_ATTEMPTS", 3)
    Delayed::Job.const_set("MAX_RUN_TIME", 5.minutes)
  end

Note: If your error messages are long, consider changing last_error field to a :text instead of a :string (255 character limit).


Jobs are simple ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table.
Job objects are serialized to yaml so that they can later be resurrected by the job runner.

  class NewsletterJob < Struct.new(:text, :emails)
    def perform
      emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
    end
  end

  Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))

There is also a second way to get jobs in the queue: send_later.

  BatchImporter.new(shop).send_later(:import_massive_csv, massive_csv)

This will simply create a Delayed::PerformableMethod job in the jobs table which serializes all the parameters you pass to it. There are some special smarts for active record objects
which are stored as their text representation and loaded from the database fresh when the job is actually run later.

Running the jobs

You can invoke rake jobs:work which will start working off jobs. You can cancel the rake task with CTRL-C.

You can also run by writing a simple script/job_runner, and invoking it externally:

  #!/usr/bin/env ruby
  require File.dirname(__FILE__) + '/../config/environment'

  Delayed::Worker.new.start

Workers can be running on any computer, as long as they have access to the database and their clock is in sync. You can even
run multiple workers per computer, but you must give each one a unique name:

  3.times do |n|
    worker = Delayed::Worker.new
    worker.name = 'worker-' + n.to_s
    worker.start
  end

Keep in mind that each worker will check the database at least every 5 seconds.

Note: The rake task will exit if the database has any network connectivity problems.

Cleaning up

You can invoke rake jobs:clear to delete all jobs in the queue.


  • 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
  • 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
  • 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
  • 1.2.0: Added send_later to Object for simpler job creation
  • 1.0.0: Initial release

Author: Tobi
Source Code: 
License: MIT license



Resque: A Redis-backed Ruby Library for Creating Background Jobs



Resque (pronounced like "rescue") is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and processing them later.

For the backstory, philosophy, and history of Resque's beginnings, please see the blog post (2009).

Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both.

Resque is heavily inspired by DelayedJob (which rocks) and comprises three parts:

  1. A Ruby library for creating, querying, and processing jobs
  2. A Rake task for starting a worker which processes jobs
  3. A Sinatra app for monitoring queues, jobs, and workers.

Resque workers can be given multiple queues (a "queue list"), distributed between multiple machines, run anywhere with network access to the Redis server, support priorities, are resilient to memory bloat / "leaks," tell you what they're doing, and expect failure.

Resque queues are persistent; support constant time, atomic push and pop (thanks to Redis); provide visibility into their contents; and store jobs as simple JSON packages.

The Resque frontend tells you what workers are doing, what workers are not doing, what queues you're using, what's in those queues, provides general usage stats, and helps you track failures.

Resque now supports Ruby 2.3.0 and above. We will also only be supporting Redis 3.0 and above going forward.

Note on the future of Resque

Would you like to be involved in Resque? Do you have thoughts about what Resque should be and do going forward? There's currently an open discussion here on just that topic, so please feel free to join in. We'd love to hear your thoughts and/or have people volunteer to be a part of the project!


Resque jobs are Ruby classes (or modules) which respond to the perform method. Here's an example:

class Archive
  @queue = :file_serve

  def self.perform(repo_id, branch = 'master')
    repo = Repository.find(repo_id)
    repo.create_archive(branch)
  end
end

The @queue class instance variable determines which queue Archive jobs will be placed in. Queues are arbitrary and created on the fly - you can name them whatever you want and have as many as you want.

To place an Archive job on the file_serve queue, we might add this to our application's pre-existing Repository class:

class Repository
  def async_create_archive(branch)
    Resque.enqueue(Archive, self.id, branch)
  end
end

Now when we call repo.async_create_archive('masterbrew') in our application, a job will be created and placed on the file_serve queue.

Later, a worker will run something like this code to process the job:

klass, args = Resque.reserve(:file_serve)
klass.perform(*args) if klass.respond_to? :perform

Which translates to:

Archive.perform(44, 'masterbrew')

Let's start a worker to run file_serve jobs:

$ cd app_root
$ QUEUE=file_serve rake resque:work

This starts one Resque worker and tells it to work off the file_serve queue. As soon as it's ready it'll try to run the Resque.reserve code snippet above and process jobs until it can't find any more, at which point it will sleep for a small period and repeatedly poll the queue for more jobs.


Add the gem to your Gemfile:

gem 'resque'

Next, install it with Bundler:

$ bundle


In your Rakefile, or some other file in lib/tasks (ex: lib/tasks/resque.rake), load the resque rake tasks:

require 'resque'
require 'resque/tasks'
require 'your/app' # Include this line if you want your workers to have access to your application


To make resque specific changes, you can override the resque:setup job in lib/tasks (ex: lib/tasks/resque.rake). GitHub's setup task looks like this:

task "resque:setup" => :environment do
  Grit::Git.git_timeout = 10.minutes
end

We don't want the git_timeout as high as 10 minutes in our web app, but in the Resque workers it's fine.

Running Workers

Resque workers are rake tasks that run forever. They basically do this:

loop do
  if job = reserve
    job.process
  else
    sleep 5 # Polling frequency = 5
  end
end

Starting a worker is simple:

$ QUEUE=* rake resque:work

Or, you can start multiple workers:

$ COUNT=2 QUEUE=* rake resque:workers

This will spawn two Resque workers, each in its own process. Hitting ctrl-c should be sufficient to stop them all.

Priorities and Queue Lists

Resque doesn't support numeric priorities but instead uses the order of queues you give it. We call this list of queues the "queue list."

Let's say we add a warm_cache queue in addition to our file_serve queue. We'd now start a worker like so:

$ QUEUES=file_serve,warm_cache rake resque:work

When the worker looks for new jobs, it will first check file_serve. If it finds a job, it'll process it then check file_serve again. It will keep checking file_serve until no more jobs are available. At that point, it will check warm_cache. If it finds a job it'll process it then check file_serve (repeating the whole process).
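That selection rule can be modeled in a few lines of plain Ruby (a toy model for illustration, not Resque's implementation):

```ruby
# Toy model of the queue list: reserve always scans from the left and
# takes a job from the first non-empty queue, so earlier queues win.
def reserve(queue_list)
  queue_list.each do |name, jobs|
    return [name, jobs.shift] unless jobs.empty?
  end
  nil
end

queues = { 'file_serve' => [:a, :b], 'warm_cache' => [:c] }
reserve(queues)  # => ["file_serve", :a]
reserve(queues)  # => ["file_serve", :b]
reserve(queues)  # => ["warm_cache", :c]
```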

In this way you can prioritize certain queues. At GitHub we start our workers with something like this:

$ QUEUES=critical,archive,high,low rake resque:work

Notice the archive queue - it is specialized and in our future architecture will only be run from a single machine.

At that point we'll start workers on our generalized background machines with this command:

$ QUEUES=critical,high,low rake resque:work

And workers on our specialized archive machine with this command:

$ QUEUE=archive rake resque:work

Running All Queues

If you want your workers to work off of every queue, including new queues created on the fly, you can use a splat:

$ QUEUE=* rake resque:work

Queues will be processed in alphabetical order.

Or, prioritize some queues above *:

$ QUEUE=critical,* rake resque:work

Running All Queues Except for Some

If you want your workers to work off of all queues except for some, you can use negation:

$ QUEUE=*,!low rake resque:work

Negated globs also work. The following will instruct workers to work off of all queues except those beginning with file_:

$ QUEUE=*,!file_* rake resque:work

Note that the order in which negated queues are specified does not matter, so QUEUE=*,!file_* and QUEUE=!file_*,* will have the same effect.
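The expansion of such a QUEUE value can be approximated with shell-style glob matching. This is a sketch under the assumption that negations are simply applied after the positive matches, which is why their order does not matter:

```ruby
# Expand a QUEUE spec like "*,!file_*" against the queues that exist.
def expand_queues(spec, all_queues)
  negated, positive = spec.split(',').partition { |p| p.start_with?('!') }
  negated = negated.map { |p| p[1..-1] }
  all_queues.select do |q|
    positive.any? { |p| File.fnmatch(p, q) } &&
      negated.none? { |p| File.fnmatch(p, q) }
  end
end

all = %w[critical file_serve file_store low]
expand_queues('*,!file_*', all)  # => ["critical", "low"]
```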

Process IDs (PIDs)

There are scenarios where it's helpful to record the PID of a resque worker process. Use the PIDFILE option for easy access to the PID:

$ PIDFILE=./ QUEUE=file_serve rake resque:work

Running in the background

There are scenarios where it's helpful for the resque worker to run itself in the background (usually in combination with PIDFILE). Use the BACKGROUND option so that rake will return as soon as the worker is started.

$ PIDFILE=./ BACKGROUND=yes QUEUE=file_serve rake resque:work

Polling frequency

You can pass an INTERVAL option which is a float representing the polling frequency. The default is 5 seconds, but for a semi-active app you may want to use a smaller value.

$ INTERVAL=0.1 QUEUE=file_serve rake resque:work

The Front End

Resque comes with a Sinatra-based front end for seeing what's up with your queue.

The Front End


If you've installed Resque as a gem running the front end standalone is easy:

$ resque-web

It's a thin layer around rackup so it's configurable as well:

$ resque-web -p 8282

If you have a Resque config file you want evaluated just pass it to the script as the final argument:

$ resque-web -p 8282 rails_root/config/initializers/resque.rb

You can also set the namespace directly using resque-web:

$ resque-web -p 8282 -N myapp

or set the Redis connection string if you need to do something like select a different database:

$ resque-web -p 8282 -r localhost:6379:2


Using Passenger? Resque ships with a config.ru you can use. See Phusion's deployment guides for Apache and Nginx.


If you want to load Resque on a subpath, possibly alongside other apps, it's easy to do with Rack's URLMap:

require 'resque/server'

run Rack::URLMap.new \
  "/"       => Your::App.new,
  "/resque" => Resque::Server.new

Check examples/demo/ for a functional example (including HTTP basic auth).


You can also mount Resque on a subpath in your existing Rails app by adding require 'resque/server' to the top of your routes file or in an initializer then adding this to routes.rb:

mount Resque::Server.new, :at => "/resque"


What should you run in the background? Anything that takes any time at all. Slow INSERT statements, disk manipulation, data processing, etc.

At GitHub we use Resque to process the following types of jobs:

  • Warming caches
  • Counting disk usage
  • Building tarballs
  • Building Rubygems
  • Firing off web hooks
  • Creating events in the db and pre-caching them
  • Building graphs
  • Deleting users
  • Updating our search index

As of writing we have about 35 different types of background jobs.

Keep in mind that you don't need a web app to use Resque - we just mention "foreground" and "background" because they make conceptual sense. You could easily be spidering sites and sticking data which needs to be crunched later into a queue.


Jobs are persisted to queues as JSON objects. Let's take our Archive example from above. We'll run the following code to create a job:

repo = Repository.find(44)

The following JSON will be stored in the file_serve queue:

    {
        'class': 'Archive',
        'args': [ 44, 'masterbrew' ]
    }

Because of this your jobs must only accept arguments that can be JSON encoded.

So instead of doing this:

Resque.enqueue(Archive, self, branch)

do this:

Resque.enqueue(Archive, self.id, branch)

This is why our above example (and all the examples in examples/) uses object IDs instead of passing around the objects.
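Since the payload is nothing more than JSON, the round-trip is easy to verify in an irb session:

```ruby
require 'json'

# What ends up in the queue for the enqueue call above: a plain JSON
# document naming the job class and its already-encoded arguments.
payload = JSON.generate('class' => 'Archive', 'args' => [44, 'masterbrew'])
payload                        # => {"class":"Archive","args":[44,"masterbrew"]}
JSON.parse(payload)['args']    # => [44, "masterbrew"]
```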

While this is less convenient than just sticking a marshaled object in the database, it gives you a slight advantage: your jobs will be run against the most recent version of an object because they need to pull from the DB or cache.

If your jobs were run against marshaled objects, they could potentially be operating on a stale record with out-of-date information.

send_later / async

Want something like DelayedJob's send_later or the ability to use instance methods instead of just methods for jobs? See the examples/ directory for goodies.

We plan to provide first class async support in a future release.


If a job raises an exception, it is logged and handed off to the Resque::Failure module. Failures are logged either locally in Redis or using some different backend. To see exceptions while developing, see details below under Logging.

For example, Resque ships with Airbrake support. To configure it, put the following into an initialisation file or into your rake job:

# send errors which occur in background jobs to redis and airbrake
require 'resque/failure/multiple'
require 'resque/failure/redis'
require 'resque/failure/airbrake'

Resque::Failure::Multiple.classes = [Resque::Failure::Redis, Resque::Failure::Airbrake]
Resque::Failure.backend = Resque::Failure::Multiple

Keep this in mind when writing your jobs: you may want to throw exceptions you would not normally throw in order to assist debugging.

Rails example

If you are using ActiveJob here's how your job definition will look:

class ArchiveJob < ApplicationJob
  queue_as :file_serve

  def perform(repo_id, branch = 'master')
    repo = Repository.find(repo_id)
    repo.create_archive(branch)
  end
end

class Repository
  def async_create_archive(branch)
    ArchiveJob.perform_later(self.id, branch)
  end
end

It is important to run ArchiveJob.perform_later(self.id, branch) rather than Resque.enqueue(Archive, self.id, branch). Otherwise Resque will process the job without actually doing anything. Even if you put an obviously buggy line like 0/0 in the perform method, the job will still succeed.



You may want to change the Redis host and port Resque connects to, or set various other options at startup.

Resque has a redis setter which can be given a string or a Redis object. This means if you're already using Redis in your app, Resque can re-use the existing connection.

String: Resque.redis = 'localhost:6379'

Redis: Resque.redis = $redis

For our rails app we have a config/initializers/resque.rb file where we load config/resque.yml by hand and set the Redis information appropriately.

Here's our config/resque.yml:

development: localhost:6379
test: localhost:6379
fi: localhost:6379
production: <%= ENV['REDIS_URL'] %>

And our initializer:

rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
config_file = rails_root + '/config/resque.yml'

resque_config = YAML::load(ERB.new(IO.read(config_file)).result)
Resque.redis = resque_config[rails_env]

Easy peasy! Why not just use RAILS_ROOT and RAILS_ENV? Because this way we can tell our Sinatra app about the config file:

$ RAILS_ENV=production resque-web rails_root/config/initializers/resque.rb

Now everyone is on the same page.

Also, you could disable jobs queueing by setting 'inline' attribute. For example, if you want to run all jobs in the same process for cucumber, try:

Resque.inline = ENV['RAILS_ENV'] == "cucumber"


Workers support basic logging to STDOUT.

You can control the logging threshold using Resque.logger.level:

# config/initializers/resque.rb
Resque.logger.level = Logger::DEBUG

If you want Resque to log to a file, in Rails do:

# config/initializers/resque.rb
Resque.logger = Logger.new(Rails.root.join('log', "#{Rails.env}_resque.log"))


If you're running multiple, separate instances of Resque you may want to namespace the keyspaces so they do not overlap. This is not unlike the approach taken by many memcached clients.

This feature is provided by the redis-namespace library, which Resque uses by default to separate the keys it manages from other keys in your Redis server.

Simply use the Resque.redis.namespace accessor:

Resque.redis.namespace = "resque:GitHub"

We recommend sticking this in your initializer somewhere after Redis is configured.

Storing Statistics

Resque can store counts of processed and failed jobs.

By default it will store it in Redis using the keys stats:processed and stats:failed.

Some apps would want another stats store, or even a null store:

# config/initializers/resque.rb
class NullDataStore
  def stat(stat)
    0
  end

  def increment_stat(stat, by)
  end

  def decrement_stat(stat, by)
  end

  def clear_stat(stat)
  end
end

Resque.stat_data_store = NullDataStore.new

Plugins and Hooks

For a list of available plugins, see the plugins page on the Resque project wiki.

If you'd like to write your own plugin, or want to customize Resque using hooks (such as Resque.after_fork), see docs/

Additional Information

Resque vs DelayedJob

How does Resque compare to DelayedJob, and why would you choose one over the other?

  • Resque supports multiple queues
  • DelayedJob supports finer grained priorities
  • Resque workers are resilient to memory leaks / bloat
  • DelayedJob workers are extremely simple and easy to modify
  • Resque requires Redis
  • DelayedJob requires ActiveRecord
  • Resque can only place JSONable Ruby objects on a queue as arguments
  • DelayedJob can place any Ruby object on its queue as arguments
  • Resque includes a Sinatra app for monitoring what's going on
  • DelayedJob can be queried from within your Rails app if you want to add an interface

If you're doing Rails development, you already have a database and ActiveRecord. DelayedJob is super easy to setup and works great. GitHub used it for many months to process almost 200 million jobs.

Choose Resque if:

  • You need multiple queues
  • You don't care / dislike numeric priorities
  • You don't need to persist every Ruby object ever
  • You have potentially huge queues
  • You want to see what's going on
  • You expect a lot of failure / chaos
  • You can setup Redis
  • You're not running short on RAM

Choose DelayedJob if:

  • You like numeric priorities
  • You're not doing a gigantic amount of jobs each day
  • Your queue stays small and nimble
  • There is not a lot failure / chaos
  • You want to easily throw anything on the queue
  • You don't want to setup Redis

In no way is Resque a "better" DelayedJob, so make sure you pick the tool that's best for your app.


On certain platforms, when a Resque worker reserves a job it immediately forks a child process. The child processes the job then exits. When the child has exited successfully, the worker reserves another job and repeats the process.


Because Resque assumes chaos.

Resque assumes your background workers will lock up, run too long, or have unwanted memory growth.

If Resque workers processed jobs themselves, it'd be hard to whip them into shape. Let's say one is using too much memory: you send it a signal that says "shutdown after you finish processing the current job," and it does so. It then starts up again - loading your entire application environment. This adds useless CPU cycles and causes a delay in queue processing.

Plus, what if it's using too much memory and has stopped responding to signals?

Thanks to Resque's parent / child architecture, jobs that use too much memory release that memory upon completion. No unwanted growth.

And what if a job is running too long? You'd need to kill -9 it then start the worker again. With Resque's parent / child architecture you can tell the parent to forcefully kill the child then immediately start processing more jobs. No startup delay or wasted cycles.

The parent / child architecture helps us keep tabs on what workers are doing, too. By eliminating the need to kill -9 workers we can have parents remove themselves from the global listing of workers. If we just ruthlessly killed workers, we'd need a separate watchdog process to add and remove them to the global listing - which becomes complicated.

Workers instead handle their own state.
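The fork-per-job pattern itself is plain Ruby. Here is a toy sketch of the idea on a Unix-like system (an illustration, not Resque's actual worker loop):

```ruby
# Toy fork-per-job worker: the parent forks, the child runs the job and
# exits hard (skipping at_exit hooks, as Resque does), and the parent
# reaps the child before moving on to the next job.
def work_one_job(job)
  pid = fork do
    job.call      # child: process the job in an isolated process
    exit!(0)      # child: exit immediately, releasing any memory it grew
  end
  Process.wait(pid)  # parent: block until the child finishes
  $?.success?        # true if the child exited cleanly
end

work_one_job(-> { 1 + 1 })  # => true
```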

at_exit Callbacks

Resque uses Kernel#exit! for exiting its workers' child processes. So any at_exit callback defined in your application won't be executed when the job is finished and the child process exits.

You can alter this behavior by setting the RUN_AT_EXIT_HOOKS environment variable.

Parents and Children

Here's a parent / child pair doing some work:

$ ps -e -o pid,command | grep [r]esque
92099 resque: Forked 92102 at 1253142769
92102 resque: Processing file_serve since 1253142769

You can clearly see that process 92099 forked 92102, which has been working since 1253142769.

(By advertising the time they began processing you can easily use monit or god to kill stale workers.)

When a parent process is idle, it lets you know what queues it is waiting for work on:

$ ps -e -o pid,command | grep [r]esque
92099 resque: Waiting for file_serve,warm_cache


Resque workers respond to a few different signals:

  • QUIT - Wait for child to finish processing then exit
  • TERM / INT - Immediately kill child then exit
  • USR1 - Immediately kill child but don't exit
  • USR2 - Don't start to process any new jobs
  • CONT - Start to process new jobs again after a USR2

If you want to gracefully shutdown a Resque worker, use QUIT.

If you want to kill a stale or stuck child, use USR1. Processing will continue as normal unless the child was not found. In that case Resque assumes the parent process is in a bad state and shuts down.

If you want to kill a stale or stuck child and shut down, use TERM.

If you want to stop processing jobs, but want to leave the worker running (for example, to temporarily alleviate load), use USR2 to stop processing, then CONT to start it again. It's also possible to pause all workers.
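As a sketch of scripting this, the QUIT-for-graceful-shutdown advice can be wrapped in a small Ruby helper (the helper name is made up; the PID would come from ps output or Resque.workers):

```ruby
# Send a signal to a worker process by PID.
# Returns true if the signal was delivered, false if no such process exists.
def signal_worker(pid, signal = 'QUIT')
  Process.kill(signal, pid)
  true
rescue Errno::ESRCH
  false
end
```

From a shell, kill -s QUIT followed by the worker's PID does the same thing.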


Heroku

When shutting down processes, Heroku sends every process a TERM signal at the same time. By default this causes an immediate shutdown of any running job, leading to frequent Resque::TermException errors. For short-running jobs, a simple solution is to give the job a small amount of time to finish before killing it.

Resque doesn't handle this out of the box (on both the cedar-14 and heroku-16 stacks); you need to install the resque-heroku-signals addon, which adds the signal handling required to make the behavior described above work. Related issue: #1559

To accomplish this set the following environment variables:

RESQUE_PRE_SHUTDOWN_TIMEOUT - The time between the parent receiving a shutdown signal (TERM by default) and it sending that signal on to the child process. Designed to give the child process time to complete before being forced to die.

TERM_CHILD - Must be set for RESQUE_PRE_SHUTDOWN_TIMEOUT to be used. After the timeout, if the child is still running it will raise a Resque::TermException and exit.

RESQUE_TERM_TIMEOUT - By default you have a few seconds to handle Resque::TermException in your job. RESQUE_TERM_TIMEOUT and RESQUE_PRE_SHUTDOWN_TIMEOUT must together be lower than the Heroku dyno shutdown timeout.
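A sketch of setting these on a Heroku app (the timeout values are illustrative only and must fit within the dyno shutdown window):

```shell
# Illustrative values; tune them for your jobs
heroku config:set TERM_CHILD=1 \
  RESQUE_PRE_SHUTDOWN_TIMEOUT=5 \
  RESQUE_TERM_TIMEOUT=8
```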

Pausing all workers

Workers will not process pending jobs if the Redis key pause-all-workers is set with the string value "true".

Resque.redis.set('pause-all-workers', 'true')

Nothing happens to jobs that are already being processed by workers.

Unpause by removing the Redis key pause-all-workers.
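As a sketch, the pause / unpause toggling can be wrapped in two small helpers (the helper names are made up; in a real app the redis argument would be Resque.redis):

```ruby
PAUSE_ALL_WORKERS_KEY = 'pause-all-workers'

# Stop workers from picking up new jobs; jobs already running are unaffected.
def pause_all_workers(redis)
  redis.set(PAUSE_ALL_WORKERS_KEY, 'true')
end

# Delete the key so workers resume processing pending jobs.
def unpause_all_workers(redis)
  redis.del(PAUSE_ALL_WORKERS_KEY)
end
```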




god

If you're using god to monitor Resque, we have provided example configs in examples/god/. One is for starting / stopping workers; the other is for killing workers that have been running too long.


monit

If you're using monit, examples/monit/resque.monit is provided free of charge. This is not used by GitHub in production, so please send patches for any tweaks or improvements you can make to it.

Mysql::Error: MySQL server has gone away

If your workers remain idle for too long, they may lose their MySQL connection. Depending on your version of Rails, we recommend the following:


Rails 3.x

In your perform method, add the following line:

class MyTask
  def self.perform
    ActiveRecord::Base.verify_active_connections!
    # rest of your code
  end
end

The Rails doc says the following about verify_active_connections!:

Verify active connections and remove and disconnect connections associated with stale threads.

Rails 4.x

In your perform method, instead of verify_active_connections!, use:

class MyTask
  def self.perform
    ActiveRecord::Base.clear_active_connections!
    # rest of your code
  end
end

From the Rails docs on clear_active_connections!:

Returns any connections in use by the current thread back to the pool, and also returns connections to the pool cached by threads that are no longer alive.


Development

Want to hack on Resque?

First clone the repo and run the tests:

git clone git://
cd resque
rake test

If the tests do not pass make sure you have Redis installed correctly (though we make an effort to tell you if we feel this is the case). The tests attempt to start an isolated instance of Redis to run against.

Also make sure you've installed all the dependencies correctly. For example, try loading the redis-namespace gem after you've installed it:

$ irb
>> require 'rubygems'
=> true
>> require 'redis/namespace'
=> true

If you get an error requiring any of the dependencies, you may have failed to install them or be seeing load path issues.


Demo

Resque ships with a demo Sinatra app for creating jobs that are later processed in the background.

Try it out by looking at the README, found at examples/demo/README.markdown.


Contributing

Read the contributing guide first.

Once you've made your great commits:

  1. Fork Resque
  2. Create a topic branch - git checkout -b my_branch
  3. Push to your branch - git push origin my_branch
  4. Create a Pull Request from your branch


Questions

Please add them to the FAQ or open an issue on this repo.


This project uses Semantic Versioning.

Author: Resque
Source Code: 
License: MIT license


Resque: A Redis-backed Ruby Library for Creating Background Jobs

World Web Technology has Multiple Job Vacancies for Multiple Positions

Are you looking for a company that gives you the chance to explore your skills and assures you rapid growth of your career?

World Web Technology Pvt. Ltd. has multiple vacancies for multiple positions. So, ✊ grab the opportunity to work with the best company in Ahmedabad. 😍

Interested candidates can simply drop an email to 📩 OR apply online at

References are highly appreciated.



How to Prepare for a Tech Job as a Beginner

A few months ago, I was invited to speak to a group of engineering students at a renowned university in India. I was supposed to interact with them, motivate them, and finally tell them: "What is the industry (the world of 'work') like?" and "How can you prepare for it?"

After spending more than 15 years developing software, launching products, and managing teams, clients, and expectations, I had plenty of thoughts to share with the younger generation.

Fortunately, I was able to boil everything down to eight high-level points without boring my audience.

I will share those points in this article to help you prepare better for upcoming opportunities and challenges. Every point mentioned in the article applies to everyone, regardless of your current industry experience.

Whenever I mention the term "industry" in this article, I mean the "software industry," since my experience relates directly to it. Happy reading!

There Are Three Kinds of People in the Industry

We can classify people working in the software industry into three main groups.


  • Following: People who need career guidance and a defined path to achieve their professional goals. They look for feedback and validation from people who are already doing what it takes to grow in the industry.
  • Doing: These people are already doing the things needed to grow in the industry. They stay relevant with the latest and greatest out there. They sharpen their skills periodically and help their followers grow by sharing knowledge and information. There are fewer people in this category than in the Following category.
  • Doing + What's Next?: This group of people is not only doing things but also building for the future. They cultivate a vision of what's next and work toward it with great passion. Their efforts don't necessarily have to produce an extraordinary outcome, but they keep trying. Again, there are fewer people in this category than in the Doing category discussed above.

Note that these categories do not determine who is senior or junior in the industry or the organization. Instead, these categories exist across all grades, levels, and job functions.

Also, the exciting part is that a single person can play a role in all three categories depending on the situation, skill, and context.

For example, Ms. X is doing great in web development technologies, problem solving, and building tools to help in the future. She is now starting her blogging journey to share her knowledge widely. She is learning from the tech blogging community by following established bloggers.


So, how do we make sure we consistently build our presence in these categories and move the needle to enter the Doing and Doing + What's Next phases?

8 Tips to Help You Advance Your Coding Career

Yes, I want to summarize my advice in eight crucial points to focus on. You may already be doing some or all of these, or you may not have started with them yet. Either way is fine, and I hope this encourages you to take one more step from here.

1. Build Habits

Our habits drive us through life. We build many of them unknowingly, and some we have to build consciously.

A good habit helps you develop the right attitude to solve problems, handle challenging situations, and make better decisions. It helps you set rational goals and move closer to them. People with good habits are organized, thoughtful, approachable, and have a positive mindset.

So, what are some good habits? There are many; here are a few basic ones.

  • Reading
  • Writing and taking notes
  • Physical exercise
  • Keeping a schedule
  • Getting organized
  • Saving money
  • Learning

Build habits, good ones. They will set the stage for you to decide between good and bad, short term and long term, dos and don'ts, and right or wrong.

But how do we build good habits? Well, I could write several articles on this topic alone, but I'll emphasize these points for now:

  • Find a habit and a reason you want to build it. What is the end goal?
  • Find a trigger for it. A trigger motivates you to start and pushes you to stick with it. For example, listening to music could trigger the start of physical exercise.
  • Plan it knowing your limitations and all the ways you might fail.
  • If you couldn't keep the habit, think about what went wrong. Do you need it? Readjust, replan, and start again.

2. Find Your Passion

Your passion keeps you going and helps you live a motivated professional and personal life. Passion is an "individual" thing that can impact many people in your circles. You can be passionate about technology, health, writing, anything you love doing consistently.

However, one piece of advice I received early in my career was: "don't follow your passion blindly." Passion should be linked with your goals, career, and work. It is critical to understand the difference between a hobby and a passion. You may have a hobby unrelated to your career, but your passion should be related to it.

It is essential to identify your passion, feed it with plenty of practice, and renew it from time to time.

3. Connect with People

Social networking for developers and developer communities are influential in building your career. You get the chance to meet like-minded people, find role models, and gain opportunities to collaborate, learn, and find jobs.

Whether you are a student, a beginner, or a veteran professional, developer social networks are undoubtedly a great option to consider. Platforms like Twitter, LinkedIn, Showwcase, and Polywork are excellent places to check out. You can connect with people of interest, learn from them, and contribute.

Learning and sharing is a wonderful cycle that builds knowledge. It grows when we step out of our silos and learn in public. Also, learning from others' experience will accelerate our growth. So, connect.

4. Stay Curious

Curiosity is the desire to learn something new. Stay curious and be open to learning. Curiosity brings questions and doubts to mind. The fun is in finding the answers.

So, ask questions when you are in doubt; don't be shy wondering whether it's a silly question, what people will think, and so on.

Staying curious will help you discover how things work under the hood. Knowing the internals of things has many benefits when it comes to programming. So, stay curious and keep exploring.

5. Build Side Hustles

Here comes my favorite point: side hustles. When you build the habit of doing things, feed a passion directed toward your career goal, seek to learn new things, and connect with people, you have an ocean of opportunities for side hustles.

But wait, what are side hustles, and why are they necessary? Don't we already have enough to do? Yes, very practical questions. Let's get to them one by one.

Side hustles are anything you do outside your regular job to gain knowledge, reputation, money, and growth. Side hustles take several forms, such as:

  • Contributing to open source projects
  • Writing articles on a blog
  • Mentoring
  • Teaching
  • Freelancing
  • Community building
  • Publishing books and e-books
  • Speaking at conferences
  • Creating video content... and much more

Now, all of this takes time, and of course, you may have to take care of something called your "main" job. However, most of the above do not require a huge amount of time or dedication. Moreover, all of these can be a byproduct of your "main" job.

Let's take some examples:

  • Solved a technical problem at work? Write about it as an article. Create a video explaining the steps and upload it to YouTube. Share it on StackOverflow, the Showwcase community, Twitter, and LinkedIn.
  • Have expertise in specific areas and lots of problem-solving notes? Move them into a document and publish it as an e-book. Don't worry about who will make use of it. There is always great demand for quality content.
  • Love teaching? Spend an hour on weekends interacting with people interested in your areas of expertise. Speak about the topic at a conference.

It's all good as long as you can handle side hustles without burning out. I have captured some of my personal experience doing side projects as a developer here.

6. Don't Neglect Soft Skills

Soft skills are all about how humans interact with other human beings at work, in personal life, anywhere in the world, and in any possible mode (in person, remotely, virtually). Unlike technical skills, soft skills are less about learning and more about realization.

Here are some soft skills that need special attention:

  • Patience
  • Empathy
  • Problem solving
  • Communication (not just spoken or written language; it includes body language, confidence, conflict resolution, and more)
  • Teamwork
  • Owning your mistakes and taking accountability
  • Time management (we'll talk about it in a moment)

Some classes and courses teach many of these soft skills. But you must work to close the gap yourself and improve these skills gradually.

7. Manage Your Time

Let me start with a confession. I am still learning to manage time, but the good news is that I am getting better at it.

Each of us has 24 hours in a day, so we have to manage all our activities within that window. The problem, however, comes with having too many things to fit into it.

Here are some practices (principles, too) that I have been following and seeing good results with:

  • Not everything is crucial to us every day. The hard part is that we assume something is essential until we think it through.
  • So, we have to think and prioritize. That also includes regular activities like sleep, exercise, eating on time, health, family care, and so on.
  • Don't focus on things that have lower priority or that can wait until the next day or week.
  • Don't multitask. It only increases stress and reduces productivity in the long run. Take on one task, focus on it within a time frame, complete it, and then move on to the next.
  • Take breaks between task switches. Rejuvenate and re-energize.
  • If something is taking longer than planned, accept that it happens. You may not meet your time-management plan every day.

I hope these tips give you enough food for thought to start managing your time better.

8. Find a Mentor

Do yourself a favor: find a good mentor. Learning from someone's knowledge and experience is immensely beneficial. So, let's understand who a mentor can be, what their role is, and how we can benefit as mentees.

A mentor is a person who gives you guidance and advice to realize your aspirations. It could be for developing a career, learning a new area, understanding business processes, and much more.

A mentor can help by sharing experiences and resources, providing motivation, and setting and tracking individual and project goals. A mentor can also be a teacher, but in most cases, teaching focuses on the "how" while mentoring focuses on the "why."

A mentee is a person who is being mentored, guided, and advised by a mentor. A mentee approaches a mentor with aspirations, ambitions, and desires. The mentor guides the mentee to help them reach their goals.

In a mentorship program, the mentee drives toward success with the mentor's help. The mentee decides how much help and guidance they need to achieve the goal of the mentorship.

A mentor-mentee relationship should go beyond technology and project knowledge sharing. It is also about understanding each other's emotional space to achieve the goals of the mentorship.

Now, the most crucial part is finding a good mentor. Several platforms offer mentorship, and there are great mentors creating value for many aspiring people. You can always try your luck there and find the best connection. I feel it is more authentic if you find someone from your network or community circle whom you know personally. That can work even better.

In Summary

To sum up, focus on these points along with all the pointers we have discussed in this article:

  • Build good habits.
  • Find your passion carefully.
  • Connect with like-minded people and build your network.
  • Stay curious and keep learning.
  • Use side hustles to grow.
  • Soft skills are essential.
  • Learn to manage your time.
  • Find a mentor.

Before We End...

I hope you found this article insightful and that it helps you prepare better for your career.

See you soon with my next article. Until then, take care and stay happy.


