Haylee Dibbert


How to Improve Your Workflow as a Programmer!

In this video I talk about how to improve your workflow as a programmer. This topic has kept me busy over the last few days, and I just wanted to make a short video about it!

#programming

Fredy Larson


Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts. Today I am posting the final two in the series.

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Questions about best practices

Q: Business data versus workflow data: if you cannot tear them apart, how can you keep them consistent? Are the eventual/transactional consistency problems simpler or more complex with Camunda BPM in the equation?

This is quite a complex question, as it depends on the exact architecture and technology you want to use.

Example 1: You use Camunda embedded as a library, probably using the Spring Boot starter. In this case, your business data can live in the same database as the workflow context, everything can join one ACID transaction, and the result is strong consistency.
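To make the shared-transaction idea concrete, here is a minimal Python/SQLite sketch, assuming invented table names and schema (a real embedded Camunda setup would use the engine's own tables, with Spring managing the transaction):

```python
import sqlite3

# Business data and workflow context live in the same database,
# so a single ACID transaction can cover both writes.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE IF NOT EXISTS workflow_state (order_id TEXT PRIMARY KEY, step TEXT)")

def place_order(order_id: str, amount: float) -> None:
    # Either both rows are written or neither is: strong consistency.
    with conn:  # commits on success, rolls back on exception
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        conn.execute("INSERT INTO workflow_state VALUES (?, ?)", (order_id, "PAYMENT_PENDING"))

place_order("order-42", 99.90)
```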

Example 2: You leverage Camunda Cloud and code your service in Node.JS, storing data in some database. Now you have no shared transaction; you start living in an eventually consistent world and need to rely on “at-least-once” semantics. This is not a problem per se, but it at least requires some thinking about the situations that can arise. I should probably write a separate piece about that, but I have used this picture in the past to explain the problem (and this very basic blog post might also help):

So you can end up with money charged to the credit card while the workflow does not yet know about it. But in that case you leverage the retry capabilities and will be fine soon (that is, eventually).
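To sketch what “at-least-once” semantics ask of your own code, here is a hypothetical Python example of an idempotent task handler; the task id used as an idempotency key, the in-memory store, and the payment function are all invented for illustration:

```python
# If the workflow engine redelivers a task ("at-least-once"), the
# idempotency key keeps the card from being charged twice.
processed = {}  # idempotency key -> charge id (a real system would use a database)

def charge_credit_card(task_id: str, card: str, amount_cents: int) -> str:
    if task_id in processed:       # a retry of a task we already handled
        return processed[task_id]  # return the earlier result, charge nothing
    charge_id = payment_provider_charge(card, amount_cents)  # the real side effect
    processed[task_id] = charge_id
    return charge_id

def payment_provider_charge(card: str, amount_cents: int) -> str:
    # Stand-in for a real payment API call. Note that a crash between the
    # charge and the bookkeeping above still needs handling, e.g. via an
    # idempotency key passed to the provider itself.
    return f"charge-{card[-4:]}-{amount_cents}"

print(charge_credit_card("task-1", "4111111111111111", 9990))
```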

#microservices #monitoring-microservices #microservices-or #workflow-automation #process-automation #bpmn #workflow #developers-workflow


Roberta Ward


Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation

On Wednesday, March 11, 2020, I conducted the webinar titled “Monitoring & Orchestrating Your Microservices Landscape using Workflow Automation”. Not only was I overwhelmed by the number of attendees, but we also got a huge list of interesting questions before and especially during the webinar. Some of them were answered, but a lot of them were not. I want to answer all open questions in this series of seven blog posts.

Part 1: BPMN & modeling related questions (6 answers)

Part 2: Architecture related questions (12)

Part 3: Stack & technology questions (6)

Part 4: Camunda product-related questions (5)

Part 5: Camunda Optimize specific questions (3)

Part 6: Questions about best practices (5)

Part 7: Questions around project layout, journey and value proposition (3)

Note that we have also started to experiment with the Camunda Question Corner and are discussing making it more frequent, so keep an eye on our community for more opportunities to ask anything (especially as in-person events are canceled for some time).

Camunda product-related questions

Q: What is the difference between Camunda BPM and Zeebe?

Or, to ask the same question in different forms: How do you position Camunda BPM vs Zeebe in relation to this presentation? Is Camunda BPM still the best and most reliable solution for a microservice architecture with orchestration flows? Or is Zeebe the recommended route for such a new project?

To get everybody on the same page first, within Camunda we have two open-source workflow engine projects:

  • Camunda BPM: a BPMN workflow engine that persists state via a relational database. The engine itself is stateless; if you cluster the engine, all nodes meet in the database.
  • Zeebe: a BPMN workflow engine that persists state on its own (a kind of event sourcing). Zeebe forms its own distributed system and replicates its state to other nodes using the RAFT protocol. If you want to learn more about it, check out Zeebe.io — a horizontally scalable distributed workflow engine.

#microservices #workflow-autom #microservices-or #monitoring-micro #bpmn-workflow #workflow-modeling #camunda #zeebe

Part 3: Data Science Workflow - KDnuggets

Learn and appreciate the typical workflow for a data science project, including data preparation (extraction, cleaning, and understanding), analysis (modeling), reflection (finding new paths), and communication of the results to others.



By Sciforce.

Data science workflow.

By now, you have gained enough knowledge and skills in Data Science and have built your first (or even your second and third) project. At this point, it is time to improve your workflow to streamline the development process that follows.

There is no specific template for solving any data science problem (otherwise, you’d see it in the first textbook you come across). Each new dataset and each new problem will lead to a different roadmap. However, there are similar high-level steps in many different projects.

In this post, we offer a clean workflow that can be used as a basis for data science projects. Every stage and step in it, of course, can be addressed on its own and can even be implemented by different specialists in larger-scale projects.

Framing the problem and the goals

As you already know, at the starting point you are asking questions and trying to get a handle on what data you need. Therefore, think of the problem you are trying to solve. What do you want to learn more about? For now, forget about modeling, evaluation metrics, and other data science concerns. Clearly stating your problem and defining goals is the first step to providing a good solution. Without it, you could lose track in the data science forest.

Data Preparation Phase

In any Data Science project, getting the right kind of data is critical. Before any analysis can be done, you must acquire the relevant data, reformat it into a form amenable to computation, and clean it.

Acquire data

The first step in any data science workflow is to acquire the data to analyze. Data can come from a variety of sources:

  • imported from CSV files from your local machine;
  • queried from SQL servers;
  • scraped from online repositories such as public websites;
  • streamed on-demand from online sources via an API;
  • automatically generated by physical apparatus, such as scientific lab equipment attached to computers;
  • generated by computer software, such as logs from a webserver.

In many cases, collecting data can become messy, especially if the data isn’t something people have been collecting in an organized fashion. You’ll have to work with different sources and apply a variety of tools and methods to collect a dataset.
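As a small illustration of the first few source types, here is a hedged Python sketch that loads a local CSV and pulls records from an API; the file path, URL, and column names are invented:

```python
import pandas as pd
import requests

# Import a CSV file from the local machine (path is illustrative).
sales = pd.read_csv("data/sales_2020.csv")

# Stream records on demand from an online source via an API
# (the URL and response shape are invented for illustration).
resp = requests.get("https://api.example.com/v1/orders", params={"limit": 1000})
resp.raise_for_status()
orders = pd.DataFrame(resp.json())

# Combine the sources into one dataset for the cleaning step below.
dataset = sales.merge(orders, on="order_id", how="left")
```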

There are several key points to remember while collecting data:

Data provenance: It is important to accurately track provenance, i.e., where each piece of data comes from and whether it is still up to date, since data often needs to be re-acquired later to run new experiments. Re-acquisition can be helpful if the original data sources get updated or if researchers want to test alternate hypotheses. In addition, provenance can be used to trace downstream analysis errors back to the original data sources.

Data management: To avoid data duplication and confusion between different versions, it is critical to assign proper names to the data files you create or download and then organize those files into directories. When new versions of those files are created, corresponding names should be assigned to all versions so that their differences can be tracked. For instance, scientific lab equipment can generate hundreds or thousands of data files that scientists must name and organize before running computational analyses on them.

Data storage: With modern almost limitless access to data, it often happens that there is so much data that it cannot fit on a hard drive, so it must be stored on remote servers. While cloud services are gaining popularity, a significant amount of data analysis is still done on desktop machines with data sets that fit on modern hard drives (i.e., less than a terabyte).
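One way to act on the provenance and naming advice above is a small ingest helper. This is a hypothetical sketch; the directory layout and the sidecar-file format are invented:

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

def ingest(src: str, source_url: str, data_dir: str = "data/raw") -> Path:
    """Copy an acquired file into the project, stamping its name with the
    acquisition date and recording where it came from (provenance)."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d")
    src_path = Path(src)
    dest = Path(data_dir) / f"{src_path.stem}_{stamp}{src_path.suffix}"
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_path, dest)
    # Sidecar file: lets us trace analysis errors back to the source later.
    sidecar = dest.parent / (dest.name + ".provenance.json")
    sidecar.write_text(json.dumps({
        "source_url": source_url,
        "acquired_at": stamp,
        "original_name": src_path.name,
    }, indent=2))
    return dest
```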

Reformat and clean data

Raw data is usually not in a convenient format to run an analysis since it was formatted by somebody else without that analysis in mind. Moreover, raw data often contains semantic errors, missing entries, or inconsistent formatting, so it needs to be “cleaned” prior to analysis.

Data wrangling (munging) is the process of cleaning data, putting everything together into one workspace, and making sure the data has no faults in it. You can reformat and clean the data either manually or by writing scripts. Getting all of the values into the correct format can involve stripping characters from strings, converting integers to floats, and many other things. Afterward, it is necessary to deal with missing values and null values, which are common in sparse matrices. The process of handling them is called missing data imputation, where the missing data are replaced with substituted data.
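For instance, a short pandas sketch of the reformatting and imputation steps just described, with an invented file path and column names:

```python
import pandas as pd

df = pd.read_csv("data/raw/survey.csv")  # illustrative input

# Getting values into the correct format: strip characters from the
# strings, then convert the cleaned strings to floats.
df["price"] = (
    df["price"].astype(str)
      .str.replace(r"[$,]", "", regex=True)  # "$1,234.50" -> "1234.50"
      .astype(float)
)

# Missing data imputation: replace missing entries with substituted data.
df["price"] = df["price"].fillna(df["price"].median())  # numeric: median
df["region"] = df["region"].fillna("unknown")           # categorical: sentinel
```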

#2020-jul-tutorials #overviews #beginners #data-science #data-workflow #sciforce #workflow

Mike doctor


How I Stay Productive as a 6 Figure Entrepreneur: Notion Workflow

In this video, I’m showing you my FAVORITE productivity app, Notion. If you’re looking for an app to organize your life and make more money, this is it! Sign up for Notion here: http://bit.ly/charliechangnotion

I’ve been using Notion for about 6 months now and it has changed my workflow, productivity, and success in entrepreneurship. Working for myself from home, it’s always hard to hold myself accountable and track my multiple businesses. No other app I’ve used has come close to helping my productivity, so I’m super excited to share this video with you all.

#how-i-stay-productive-as-a-6-figure-entrepreneur-notion-workflow #bitcoin #blockchain #workflow #productive