Oleta Becker

Analyzing Sweet Maria’s Coffee Cupping Metrics

Full Disclaimer: I love coffee from Sweet Maria’s, and I’ve been regularly buying most of my green coffee from them for the past 6 years.

I looked at the sub-metrics of their coffee grades to better understand how useful they are for comparing coffees to each other. I used box plots, correlation, and Principal Component Analysis (PCA) to examine these grades. This work extends an earlier analysis of a large CQI Q-grade database, which asked how well those grading metrics distinguish coffees from one another.
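As a rough sketch of that workflow (not the author's actual code), the box plots, correlation matrix, and PCA can be produced with pandas and scikit-learn; the file name and sub-metric columns below are assumptions:

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed layout: one row per coffee, one column per cupping sub-metric.
metrics = ["Fragrance", "Flavor", "Aftertaste", "Acidity", "Body",
           "Balance", "Sweetness", "Uniformity", "Clean Cup"]
df = pd.read_csv("cupping_scores.csv")  # hypothetical file name

# Box plots show the spread of each sub-metric across coffees.
df[metrics].boxplot(rot=45)
plt.tight_layout()
plt.show()

# Pairwise correlation between sub-metrics.
print(df[metrics].corr().round(2))

# PCA on standardized sub-metrics: how much variance do the
# first few components explain?
scores = StandardScaler().fit_transform(df[metrics])
pca = PCA()
pca.fit(scores)
print(pca.explained_variance_ratio_.round(3))
```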

Cupping Scores (modified Q-scores)

Sweet Maria’s uses slightly different cupping criteria than the SCA criteria summarized below. It is interesting to see how sweetness, uniformity, and clean cup compare with the other sub-metrics: on the SCA scale these three start at a perfect score and only lose points through deductions, whereas Sweet Maria’s scores them in a way that gives a bit more insight into the coffee.


Raw Data

Pulling data from Sweet Maria’s was not easy. They don’t have a database to query, but they do have an archive of over 300 beans accumulated over the years. I pulled part of the data manually, as discussed in this piece, and that process gave me a spider graph of the cupping sub-metrics for each coffee.
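For reference, a spider (radar) graph of one coffee's sub-metrics can be drawn with matplotlib along these lines; the scores below are invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sub-metric scores for a single coffee (0-10 scale).
labels = ["Fragrance", "Flavor", "Aftertaste", "Acidity", "Body",
          "Balance", "Sweetness", "Uniformity", "Clean Cup"]
values = [8.5, 8.8, 8.2, 8.6, 8.4, 8.3, 9.0, 9.2, 9.1]

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values += values[:1]
angles += angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 10)
plt.show()
```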

#data-science #analysis #espresso #data #coffee


Madelyn Frami

This LED coffee table reacts to whatever's on top

The YouTube team “Ty and Gig Builds” recently decided to make their coffee table a little more interesting, adding a chain of 96 addressable LEDs underneath its clear surface. This would have been neat enough by itself, but the project doesn’t stop there: it also embeds 154 IR emitters and 154 IR receivers, allowing the table to react to whatever is on top of it. Beyond that, it can display animations without using the sensors, for a mesmerizing effect.

#arduino #led(s) #mega #interactive coffee table #led coffee table #react native

Sofia Maggio

Classification Metrics

In everyday language, accuracy and precision are often used interchangeably, but not in machine learning. Accuracy and precision are important metrics for model evaluation, and together with recall and F1 they make up the core classification metrics.

A confusion matrix is the best tool for understanding why these four metrics are so important for model evaluation. For a binary problem, it looks like this:

                    Predicted positive     Predicted negative
Actual positive     True Positive (TP)     False Negative (FN)
Actual negative     False Positive (FP)    True Negative (TN)

If you are confused, don’t worry; that’s normal. This matrix is really two tables merged into one: one table showing the predicted values and another showing the actual values. Merging the two produces True Positives, True Negatives, False Positives, and False Negatives. Here is what each of them means:

  1. True Positive (TP): the predicted and actual values are both positive.
  2. True Negative (TN): the predicted and actual values are both negative.
  3. False Negative (FN): the actual value is positive but the predicted value is negative.
  4. False Positive (FP): the actual value is negative but the predicted value is positive.

As you can see, this matrix not only helps measure the performance of a predictive model, but also shows which classes are being predicted correctly or incorrectly and where the errors occur. Now that we understand the confusion matrix, let’s look at how it helps define the classification metrics.
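As a quick illustration (the labels below are made up), scikit-learn can compute the confusion matrix directly from actual and predicted labels:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical actual and predicted labels for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# For labels [0, 1], scikit-learn orders the matrix as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")
```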

Accuracy

Precision

Recall

F1 Score
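Each of these four metrics can be computed from the confusion matrix counts; here is a minimal scikit-learn sketch, again with made-up labels:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Same hypothetical labels as in the confusion matrix example above.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Standard definitions in terms of the confusion matrix counts:
#   accuracy  = (TP + TN) / (TP + TN + FP + FN)
#   precision = TP / (TP + FP)
#   recall    = TP / (TP + FN)
#   F1        = 2 * precision * recall / (precision + recall)
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
```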

#machine-learning #classification-metrics #scikit-learn #metrics

Macey Kling

Hierarchical Performance Metrics and Where to Find Them

Hierarchical machine learning models are one top-notch trick. As discussed in previous posts, considering the natural taxonomy of the data when designing our models can be well worth our while. Instead of flattening out and ignoring those inner hierarchies, we’re able to use them, making our models smarter and more accurate.

“More accurate,” I say; are they, though? How can we tell? We are people of science, after all, and we expect bold claims to be supported by the data. This is why we have performance metrics. Whether it’s precision, F1-score, or any other lovely metric we’ve got our eye on: if using hierarchy in our models improves their performance, the metrics should show it.

Problem is, if we use regular performance metrics — the ones designed for flat, one-level classification — we go back to ignoring that natural taxonomy of the data.

If we do hierarchy, let’s do it all the way. If we’ve decided to celebrate our data’s taxonomy and build our model in its image, this needs to also be a part of measuring its performance.

How do we do this? The answer lies below.

Before We Dive In

This post is about measuring the performance of machine learning models designed for hierarchical classification. It kind of assumes you know what all those words mean. If you don’t, check out my previous posts on the topic. Especially the one introducing the subject. Really. You’re gonna want to know what hierarchical classification is before learning how to measure it. That’s kind of an obvious one.

Throughout this post, I’ll be giving examples based on this taxonomy of common house pets:

[Figure] The taxonomy of common house pets. My neighbor just adopted the cutest baby Pegasus.

Oh So Many Metrics

So we’ve got a whole ensemble of hierarchically-structured local classifiers, ready to do our bidding. How do we evaluate them?

That is not a trivial problem, and the solution is not obvious. As we’ve seen in previous problems in this series, different projects require different treatment. The best metric could differ depending on the specific requirements and limitations of your project.
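To make this concrete before looking at the options, here is a hedged sketch of one common style of hierarchical metric, hierarchical precision and recall computed over ancestor-augmented label sets; the toy taxonomy and helper functions below are assumptions based on the house-pet figure, not the author's code:

```python
# Hypothetical taxonomy based on the house-pet figure: child -> parent.
PARENT = {
    "cat": "mammal", "dog": "mammal",
    "parrot": "bird", "canary": "bird",
    "mammal": "pet", "bird": "pet",
}

def ancestors(label):
    """Return the label together with all of its ancestors in the taxonomy."""
    out = {label}
    while label in PARENT:
        label = PARENT[label]
        out.add(label)
    return out

def hierarchical_precision_recall(y_true, y_pred):
    """Micro-averaged hierarchical precision and recall over ancestor sets."""
    overlap = true_total = pred_total = 0
    for t, p in zip(y_true, y_pred):
        t_set, p_set = ancestors(t), ancestors(p)
        overlap += len(t_set & p_set)
        true_total += len(t_set)
        pred_total += len(p_set)
    return overlap / pred_total, overlap / true_total

# Predicting "dog" for a "cat" still earns partial credit for "mammal" and "pet".
hp, hr = hierarchical_precision_recall(["cat", "parrot"], ["dog", "parrot"])
print(f"hP={hp:.2f}, hR={hr:.2f}")
```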

All in all, there are three main options to choose from. Let’s introduce them, shall we?

The contestants, in all their grace and glory:

#machine-learning #hierarchical #performance-metrics #ensemble-learning #metrics

Here Are the Metrics you Need to Understand Operational Health

In recent polls we’ve conducted with engineers and leaders, we’ve found that around 70% of participants used MTTA and MTTR among their main metrics. 20% of participants cited looking at planned versus unplanned work, and 10% said they currently look at no metrics. While MTTA and MTTR are good starting points, they’re no longer enough. With the rise in complexity, it can be difficult to gain insight into your services’ operational health.
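For anyone who has not computed these before, MTTA and MTTR are simply averages over incident timestamps; a minimal sketch with made-up incident records (exact field definitions vary by team):

```python
from datetime import datetime

# Hypothetical incident records: when each incident was opened,
# acknowledged, and resolved.
incidents = [
    {"opened": "2020-10-01 02:00", "acked": "2020-10-01 02:07", "resolved": "2020-10-01 03:10"},
    {"opened": "2020-10-03 14:30", "acked": "2020-10-03 14:33", "resolved": "2020-10-03 15:02"},
    {"opened": "2020-10-05 09:15", "acked": "2020-10-05 09:40", "resolved": "2020-10-05 12:00"},
]

def minutes_between(start, end):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# MTTA: mean time from open to acknowledgement.
mtta = sum(minutes_between(i["opened"], i["acked"]) for i in incidents) / len(incidents)
# MTTR: mean time from open to resolution.
mttr = sum(minutes_between(i["opened"], i["resolved"]) for i in incidents) / len(incidents)
print(f"MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```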

In this blog post, we’ll walk you through holistic measures and best practices that you can employ starting today. These will include challenges and pain points in gaining insight as well as key metrics and how they evolve as organizations mature.

Pain Points for Creating Useful Metrics

It’s easy to fall into the trap of being data rich but information poor. Building metrics and dashboards with the right context is crucial to understanding operational health, but where do you start? It’s important to look at roadblocks to adoption thus far in your organization. Perhaps other teams (or even your team) have looked into the way you measure success before. What halted their progress? If metrics haven’t undergone any change recently, why is that?

Below are some of the top customer pain points and challenges that we typically see software and infrastructure teams encounter.

  • Lack of data: Your data is fragmented across your APM, ticketing, chatops, and other tools. Even worse, it’s typically also siloed across teams that run at different speeds. A lot of it is tribal knowledge, or it simply doesn’t exist.
  • No feedback loop: There’s limited to no integration between incidents, retrospectives, follow-up action items, planned work, and customer experience. It’s challenging to understand how it all ties together as well as pinpoint how to improve customer experience. You’re constantly being redirected by unplanned work and incidents.
  • Blank slate: Traditional APM and analytics tools are great for insights, but without a baseline of metrics that are prescriptive and based on operational best practices, it’s hard to know where to start.
  • One-size-fits-all: What works for one team won’t necessarily work for another. Everything needs to be put in the right context to provide truly relevant insights.

With these pain points in mind, let’s look at some key metrics other organizations we’ve spoken to have found success with.

#devops #metrics #site reliability engineering #site reliability #site reliability engineer #metrics monitoring #site reliability engineering tools