Maud Rosenbaum

2020-08-29

This robotic sloth is (slowly) monitoring endangered species

What if it were possible to observe endangered species in their own habitats around the clock? Conservationists believe that doing so is the best way to find out how to save these animals.

With a new robot from the Georgia Institute of Technology, they’re able to do just that. A research team developed a solar-powered, energy-efficient robot that moves slowly across the treetops — just like a sloth. It could be the key to making discoveries about endangered animals.

Slow is Fast

When most people think of animal-inspired robots, something fast comes to mind. Bots like Boston Dynamics’ Spot and MIT’s Cheetah Robot take inspiration from the animal kingdom with quick, nimble movements. Georgia Tech’s new SlothBot also draws inspiration from an animal, though one not commonly associated with robotics.

Like the mammal it’s named for, the SlothBot slowly moves across the treetops while monitoring animals, plants, and the environment beneath it. Visitors to the Atlanta Botanical Garden will see the first working prototype of the bot in action. It will operate on a cable strung between two trees near the Garden’s popular Canopy Walk. During its tenure, SlothBot will monitor temperature data, weather, and carbon monoxide levels.

“SlothBot embraces slowness as a design principle,” says Magnus Egerstedt, a professor in Georgia Tech’s School of Electrical and Computer Engineering. “That’s not how robots are typically designed today, but being slow and hyper-energy efficient will allow SlothBot to linger in the environment to observe things we can only see by being present continuously for months, or even years.”

The unconventional robot measures just three feet long and features a 3D-printed shell to keep its internal components safe from the elements. Notably, SlothBot is programmed to move only when necessary. Once its batteries need to charge, the bot will autonomously seek out sunlight to do so.

Embracing Uniqueness

The key to SlothBot’s success is its unique approach to saving energy. Of course, that wouldn’t be possible without a clever design. The research team had to think carefully about its locomotion techniques before rolling out a functional prototype.

A robot with wheels is vulnerable to things like rocky terrain and mud. Drones that fly burn too much energy to remain in one area for a significant length of time. Something with tank treads — like a Mars rover — would hypothetically work, but is loud and can destroy fragile ecosystems, scaring away the wildlife it seeks to observe.

That’s why Egerstedt and his team settled on the wire-crawling design.

However, the team still did draw inspiration from the Mars Exploration Rovers. They were able to gather data about the Red Planet for more than a dozen years thanks to their energy-conscious operation. Since they explored leisurely, the rovers were able to operate for much longer than if they moved around quickly.

As for SlothBot, the team hopes that their creation will feel like a natural part of the environment rather than a foreign piece of equipment.

“It’s really fascinating to think about robots becoming part of the environment, a member of an ecosystem,” Egerstedt said. “While we’re not building an anatomical replica of the living sloth, we believe our robot can be integrated to be part of the ecosystem it’s observing like a real sloth.”

#technology #conservation #robotics #environment #science #data science

Carmen Grimes

2020-09-01

How to Monitor Third Party API Integrations

Many enterprises and SaaS companies depend on a variety of external API integrations in order to build an awesome customer experience. Some integrations may outsource certain business functionality, such as handling payments or search, to companies like Stripe and Algolia. You may have integrated other partners which expand the functionality of your product offering. For example, if you want to add real-time alerts to an analytics tool, you might integrate the PagerDuty and Slack APIs into your application.

If you’re like most companies, though, you’ll soon realize you’re integrating hundreds of different vendors and partners into your app. Any one of them could have performance or functional issues impacting your customer experience. Worse yet, the reliability of an integration may be less visible than that of your own APIs and backend. If login functionality is broken, you’ll have many customers complaining that they cannot log into your website. However, if your Slack integration is broken, only the customers who added Slack to their account will be impacted. On top of that, since the integration is asynchronous, your customers may not realize it’s broken until days later, when they notice they haven’t received any alerts for some time.

How do you ensure your API integrations are reliable and high performing? After all, if you’re selling real-time alerting as a feature, your alerts had better be real-time and have at-least-once guaranteed delivery. Dropping alerts because your Slack or PagerDuty integration is down is unacceptable from a customer experience perspective.

What to monitor

Latency

An API integration with exceedingly high latency could be a signal that the integration is about to fail. Maybe your pagination scheme is incorrect, or the vendor has not indexed your data in a way that lets you query it efficiently.

Latency best practices

Average latency only tells you half the story. An API that consistently takes one second to complete is usually better than an API with high variance. For example, if an API takes only 30 milliseconds on average, but 1 out of 10 calls takes up to five seconds, then you have high variance in your customer experience. That variance makes bugs much harder to track down and harder to design around in your customer experience. This is why the 90th and 95th percentiles are important to look at.
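To make that concrete, here is a minimal sketch of how the average can hide an outlier that the percentiles expose. The latency samples are hypothetical, chosen to match the 30 ms / 5 s example above:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    # Nearest-rank: ceil(pct/100 * n), computed via negated floor division.
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[rank - 1]

# 9 fast calls plus 1 slow outlier, as in the example above.
latencies_ms = [30] * 9 + [5000]

avg = sum(latencies_ms) / len(latencies_ms)  # 527.0 ms — misleadingly high
p90 = percentile(latencies_ms, 90)           # 30 ms — typical experience
p95 = percentile(latencies_ms, 95)           # 5000 ms — the outlier shows up
```

The average (527 ms) describes no actual request, while p90 and p95 show both the typical experience and the tail.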

Reliability

Reliability is a key metric to monitor, especially since you’re integrating APIs that you don’t have control over. What percent of API calls are failing? In order to track reliability, you should have a rigid definition of what constitutes a failure.

Reliability best practices

While any API call with a response status code in the 4xx or 5xx family may be considered an error, you might have specific business cases where the API appears to complete successfully yet the call should still be considered a failure. For example, a data API integration that consistently returns no matches or no content could be considered failing even though the status code is always 200 OK. Another API could be returning bogus or incomplete data. Data validation is critical for measuring whether the data returned is correct and up to date.

Not every API provider and integration partner follows suggested status code mappings.
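A rigid failure definition can be captured in a small classifier. This is a hypothetical sketch of the business rule described above: a 200 OK from a data API with an empty result set still counts as a failure (the `matches` field name is an assumption, not any particular vendor’s schema):

```python
def is_failed_call(status_code, body):
    """Return True if an API call should count as a failure for reliability metrics."""
    if status_code >= 400:
        # Any 4xx or 5xx response is an error by default.
        return True
    if status_code == 200 and not body.get("matches"):
        # Business rule: a "successful" call that returns no content
        # is still a failure for this integration.
        return True
    return False
```

Centralizing the definition like this keeps your reliability percentage consistent across dashboards and alerts.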

Availability

While reliability is specific to errors and functional correctness, availability and uptime are pure infrastructure metrics that measure how often a service has an outage, even a temporary one. Availability is usually measured as a percentage of uptime per year, or as a number of 9’s.

| Availability % | Downtime per year | Downtime per month | Downtime per week | Downtime per day |
| --- | --- | --- | --- | --- |
| 90% (“one nine”) | 36.53 days | 73.05 hours | 16.80 hours | 2.40 hours |
| 99% (“two nines”) | 3.65 days | 7.31 hours | 1.68 hours | 14.40 minutes |
| 99.9% (“three nines”) | 8.77 hours | 43.83 minutes | 10.08 minutes | 1.44 minutes |
| 99.99% (“four nines”) | 52.60 minutes | 4.38 minutes | 1.01 minutes | 8.64 seconds |
| 99.999% (“five nines”) | 5.26 minutes | 26.30 seconds | 6.05 seconds | 864.00 milliseconds |
| 99.9999% (“six nines”) | 31.56 seconds | 2.63 seconds | 604.80 milliseconds | 86.40 milliseconds |
| 99.99999% (“seven nines”) | 3.16 seconds | 262.98 milliseconds | 60.48 milliseconds | 8.64 milliseconds |
| 99.999999% (“eight nines”) | 315.58 milliseconds | 26.30 milliseconds | 6.05 milliseconds | 864.00 microseconds |
| 99.9999999% (“nine nines”) | 31.56 milliseconds | 2.63 milliseconds | 604.80 microseconds | 86.40 microseconds |
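The downtime figures in the table follow directly from the availability percentage. A quick sketch of the arithmetic, assuming a 365.25-day year (which matches the table, e.g. 99% works out to 3.65 days):

```python
# Seconds in a year, using 365.25 days to match the table's figures.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(availability_pct):
    """Allowed downtime per year for a given availability target."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

# 99% ("two nines") allows about 315,576 seconds, i.e. ~3.65 days, of downtime.
two_nines = downtime_seconds_per_year(99)
```

Each added nine shrinks the budget by a factor of ten, which is why the jump from three to four nines is so much harder operationally than it looks on paper.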

Usage

Many API providers price on API usage. Even if the API is free, the provider most likely has some sort of rate limiting implemented to ensure bad actors are not starving out good clients. This means tracking your API usage with each integration partner is critical to understanding when your current usage is approaching plan limits or rate limits.

Usage best practices

It’s recommended to tie usage back to your end-users, even if the API integration sits far downstream from your customer experience. This enables measuring the direct ROI of specific integrations and finding trends. For example, let’s say your product is a CRM, and you are paying Clearbit $199 a month to enrich up to 2,500 companies. That is a direct cost tied to your customers’ usage. If free-tier customers are consuming most of your Clearbit quota, you may want to reconsider your pricing strategy. Potentially, Clearbit enrichment should be available on paid tiers only, to reduce your own cost.
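One way to implement this is a small per-user counter checked against the plan quota. This is an illustrative sketch, not any vendor’s SDK; the 2,500-call quota mirrors the Clearbit example above, and the 80% warning threshold is an assumption:

```python
from collections import Counter

class UsageTracker:
    """Attributes third-party API calls to end-users against a plan quota."""

    def __init__(self, quota):
        self.quota = quota
        self.calls_by_user = Counter()

    def record(self, user_id):
        # Tag every outbound API call with the end-user who triggered it.
        self.calls_by_user[user_id] += 1

    def total(self):
        return sum(self.calls_by_user.values())

    def near_limit(self, threshold=0.8):
        # Warn once consumption reaches 80% of the quota (by default).
        return self.total() >= self.quota * threshold
```

With per-user counts in hand, questions like “are free-tier users consuming most of the quota?” become a simple aggregation instead of a billing surprise.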

How to monitor API integrations

Monitoring API integrations seems like the correct remedy to stay on top of these issues. However, traditional Application Performance Monitoring (APM) tools like New Relic and AppDynamics focus on monitoring the health of your own websites and infrastructure. This includes infrastructure metrics like memory usage and requests per minute, along with application-level health such as Apdex scores and latency. Of course, if you’re consuming an API that runs in someone else’s infrastructure, you can’t just ask your third-party providers to install an APM agent for you. This means you need a way to monitor third-party APIs indirectly, via some other instrumentation methodology.
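One common indirect approach is caller-side instrumentation: since you can’t run an agent inside the vendor’s infrastructure, you wrap your own outbound calls and record latency and success locally. A minimal sketch, where the in-memory `metrics` list stands in for a real metrics pipeline:

```python
import time

# Stand-in for a real metrics pipeline or time-series database.
metrics = []

def monitored(vendor_name, fn, *args, **kwargs):
    """Call fn, recording latency and success/failure tagged by vendor."""
    ok = False
    start = time.monotonic()
    try:
        result = fn(*args, **kwargs)
        ok = True
        return result
    finally:
        metrics.append({
            "vendor": vendor_name,
            "latency_s": time.monotonic() - start,
            "success": ok,
        })
```

Because every third-party call flows through one wrapper, the latency percentiles, reliability rates, and usage counts described above can all be derived from the same stream of records.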

#monitoring #api integration #api monitoring #monitoring and alerting #monitoring strategies #monitoring tools #api integrations #monitoring microservices


Consider This: Theomorphic Robots; Not Losing Our Religion?

As icons and rituals adapt to newer technologies, the rise of robotics and AI can change the way we practice and experience spirituality.

**Some 100,000 years ago, fifteen people, eight of them children, were buried on the flank of Mount Precipice, just outside the southern edge of Nazareth in today’s Israel.** One of the boys still held the antlers of a large red deer clasped to his chest, while a teenager lay next to a necklace of seashells painted with ochre and brought from the Mediterranean Sea shore 35 km away. The bodies of Qafzeh are some of the earliest evidence we have of grave offerings, possibly associated with religious practice.

Although some type of _belief_ has likely accompanied us from the beginning, it’s not until 50,000–13,000 BCE that we see clear religious ideas take shape in paintings, offerings, and objects. **This is a period filled with Venus figurines, statuettes made of stone, bone, ivory and clay, portraying women with small heads, wide hips, and exaggerated breasts.** It is also the home of the beautiful **lion man**, carved out of mammoth ivory with a flint stone knife and the oldest-known zoomorphic (animal-shaped) sculpture in the world.

We’ve unearthed such representations of primordial gods, likely our first religious icons, all across Europe and as far as Siberia, and although we’ll never be able to ask their creators why they made them, we somehow still feel a connection with the stories they were trying to tell.

#robotics #artificial-intelligence #psychology #technology #hackernoon-top-story #religious-robots #robot-priest #robot-monk

Teresa Jerde

2020-08-05

Artificial Intelligence and Robotics: Who’s At Fault When Robots Kill?

Up to now, any robot that brushed with the law was running strictly according to its code. Fatal accidents and serious injuries usually only happened through human misadventure or improper use of safety systems and barriers. We’ve yet to truly test how our laws will cope with the arrival of more sophisticated automation technology — but that day isn’t very far away.

AI already infiltrates our lives on so many levels in a multitude of practical, unseen ways. While the machine revolution is fascinating — and will cause harm to humans here and there — embodied artificial intelligence systems perhaps pose the most significant challenges for lawmakers.

Robots that run according to unchanging code are one thing and have caused many deaths and accidents over the years — not just in the factory but the operating theatre too. Machines that learn as they go are a different prospect entirely — and coming up with laws for dealing with that is likely to be a gradual affair.

Emergent robot behavior and the blame game

Emergent behavior is going to make robots infinitely more effective and useful than they’ve ever been before. The potential danger with emergent behavior is that it’s unpredictable. In the past, robots got programmed for set tasks – and that was that. Staying behind the safety barrier and following established protocols kept operators safe.

#artificial-intelligence #robots #robotics #legal #blame-the-user #blame-the-maker #blame-the-robot

Future of Remote Patient Monitoring Services

With the growth of remote patient monitoring systems, healthcare software providers have been able to deliver easier solutions to patients, and access to healthcare services has also grown. The healthcare industry is expected to see a huge rise in the use of remote patient monitoring services over the coming five years. Integrating remote patient monitoring software into a patient’s chronic disease management treatment can improve the patient’s quality of life. Click on the link for more information.

#remote patient monitoring integration #remote patient monitoring vendors #remote patient monitoring providers #best patient monitoring systems #best remote patient monitoring companies