Madyson Reilly · September 3, 2020

Intro to the Uncanny Valley for Robots

The human brain is wired around faces: they’re central to how we read emotion, which is why we see faces in everyday objects. Now we are re-wiring our brains with our phones, without any real plan for the long-term effects. What happens when we give our phones a face? The feedback loop will be amplified even further.

This article considers the effects of robot design as an emotional tool and why we are not at all prepared for the repercussions. It’s a tour of the uncanny valley, robot design styles, real-world examples, and where we can start.

(Image source: author.)

The Uncanny Valley

Its original name was *bukimi no tani genshō*, which roughly translates from Japanese as *Valley of Eeriness*. The Uncanny Valley is the dip in affinity for robots on the path from nothing like humans to nearly indistinguishable from us — affinity drops when the robot is somewhere between awkward and perfect (uncanniness is a familiar eeriness).

Japanese roboticist Masahiro Mori is credited with the term. It does seem like quite a Japanese concept: observational, introspective, human, artistic. I like Japan, but this is an aside. A visualization is below [source]: from bottom to top is how much a human will enjoy spending time with the robot, and from left to right is how human the robot looks.

(Image: the uncanny valley curve. Source.)

As of 2020, we are at the point where robots (more examples later):

a) are good at some tasks and far better than humans at others,

b) can be made to look eerily human when stationary (i.e., by their construction) — but not in motion, which is still jerky and preliminary.

How does this work?

I found this summary of why the uncanny valley appears to be useful:

Mori’s original hypothesis states that as the appearance of a robot is made more human, some observers’ emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion. However, as the robot’s appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.

This area of repulsive response aroused by a robot with appearance and motion between a “barely human” and “fully human” entity is the uncanny valley. The name captures the idea that an almost human-looking robot seems overly “strange” to some human beings, produces a feeling of uncanniness, and thus fails to evoke the empathic response required for productive human–robot interaction.

The last sentence is the crucial one: a robot “fails to evoke the empathic response required for productive human-robot interaction.” To me there are two key phrases: a) evocation of an empathic response and b) an indication of productive human-robot interaction. This leads me to the questions: if making robots look human causes failed interactions, why make them look human at all? Is there no risk if they’re never made to look human? I think this is crucial to consider, and I expect the answer to be that the vast majority of robots do not need to look human. The categories that do benefit from humanness, like medical and social robots, need an increased level of scrutiny because those applications sit on a knife’s edge of risk.

Below is a fun experiment where they measured reactions to a set of robots that people rated on a “mechano-humanness” score. The findings match the theory fairly well, except that the authors may be forcing a cubic fit onto the data, and there are impressively few points in the actual *uncanny valley* — we’ll have to keep an eye on this type of data over time.

(Image: likability ratings of robot faces against mechano-humanness score. Source: Mathur, Maya B.; Reichling, David B. (2016). “Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley.” Cognition 146: 22–32. doi:10.1016/j.cognition.2015.09.008. PMID 26402646.)
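To make the “forcing a cubic” worry concrete, here is a minimal sketch of what fitting a cubic to likability ratings looks like. The data below are synthetic (a cubic with a mid-range dip plus noise), not the paper’s actual ratings; the point is that a cubic will report a good fit on any roughly valley-shaped data, which by itself doesn’t rule out other curve shapes.

```python
import numpy as np

# Synthetic stand-in data: "mechano-humanness" scores (x, 0-100) and
# rated likability (y), shaped to dip in the middle like an uncanny valley.
rng = np.random.default_rng(0)
x = np.linspace(0, 100, 40)
y = 0.0002 * (x - 10) * (x - 55) * (x - 85) + rng.normal(0, 1, x.size)

# Fit a cubic polynomial, as in the Mathur & Reichling analysis.
coeffs = np.polyfit(x, y, deg=3)
fitted = np.polyval(coeffs, x)

# R^2 tells us how much variance the cubic explains; a high value here
# still doesn't prove the true curve is cubic rather than some other dip.
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"cubic fit R^2 = {r2:.3f}")
```

With so few points inside the valley itself, the shape of the dip is mostly determined by the fit family you choose — which is exactly why this data is worth revisiting as more robots get rated.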

#automation #robotics #artificial-intelligence #technology


Consider This: Theomorphic Robots; Not Losing Our Religion?

As icons and rituals adapt to newer technologies, the rise of robotics and AI can change the way we practice and experience spirituality.

**Some 100,000 years ago, fifteen people, eight of them children, were buried on the flank of [Mount Precipice], just outside the southern edge of [Nazareth] in today’s Israel.** One of the boys still held the antlers of a large red deer clasped to his chest, while a teenager lay next to a necklace of seashells painted with ochre and brought from the Mediterranean Sea shore 35 km away. The bodies of Qafzeh are some of the earliest evidence we have of grave offerings, possibly associated with religious practice.

Although some type of *belief* has likely accompanied us from the beginning, it’s not until 50,000–13,000 BCE that we see clear religious ideas take shape in paintings, offerings, and objects. **This is a period filled with Venus figurines, statuettes made of stone, bone, ivory, and clay, portraying women with small heads, wide hips, and exaggerated breasts.** It is also the home of the beautiful **lion man**, carved out of mammoth ivory with a flint stone knife and the oldest-known zoomorphic (animal-shaped) sculpture in the world.

We’ve unearthed such representations of primordial gods, likely our first religious icons, all across Europe and as far as Siberia, and although we’ll never be able to ask their creators why they made them, we somehow still feel a connection with the stories they were trying to tell.

#robotics #artificial-intelligence #psychology #technology #hackernoon-top-story #religious-robots #robot-priest #robot-monk

Teresa Jerde · August 5, 2020

Artificial Intelligence and Robotics: Who’s At Fault When Robots Kill?

Up to now, any robots brushing with the law were always running strictly according to their code. Fatal accidents and serious injuries usually only happened through human misadventure or improper use of safety systems and barriers. We’ve yet to truly test how our laws will cope with the arrival of more sophisticated automation technology — but that day isn’t very far away.

AI already infiltrates our lives on so many levels in a multitude of practical, unseen ways. While the machine revolution is fascinating — and will cause harm to humans here and there — embodied artificial intelligence systems perhaps pose the most significant challenges for lawmakers.

Robots that run according to unchanging code are one thing and have caused many deaths and accidents over the years — not just in the factory but the operating theatre too. Machines that learn as they go are a different prospect entirely — and coming up with laws for dealing with that is likely to be a gradual affair.

Emergent robot behavior and the blame game

Emergent behavior is going to make robots infinitely more effective and useful than they’ve ever been before. The potential danger with emergent behavior is that it’s unpredictable. In the past, robots were programmed for set tasks — and that was that. Staying behind the safety barrier and following established protocols kept operators safe.

#artificial-intelligence #robots #robotics #legal #blame-the-user #blame-the-maker #blame-the-robot


Madilyn Kihn · August 25, 2020

4 of The Most Unique Robots

Everybody hold on. Our world will soon be flooded with robots of every shape, style, and function. No sector of our society will be excluded from the imminent onslaught of robotics and artificial intelligence.

When you consider how artificial intelligence today can write its own code to update itself, there is no limit to what robots can achieve.

#technology #robots #artificial-intelligence #robotics

Archie Powell · July 11, 2021

How to Invest in Robotics and Artificial Intelligence

Learn more about the Market Conditions and Invest in Robotics and Artificial Intelligence

We frequently put robotics and artificial intelligence together, but they are two separate fields. The robotics and artificial intelligence industries are some of the largest markets in the tech space today. Almost every industry in the world is adopting these technologies to boost growth and increase customer engagement.

**Best Robotics Stocks to Invest In**
  • **Oceaneering International, Inc.:** Oceaneering is an engineering and applied technology service provider to industries such as oil and gas, aerospace, marine, defense, entertainment, logistics, science, and renewable energy. The company aims to provide unmatched services that help its customers develop, regardless of market conditions.
  • **Brooks Automation, Inc.:** Brooks Automation is a provider of automation, vacuum, and instrumentation solutions for semiconductor manufacturing, life sciences, and other industries. Recently, the company announced that it will split into two independent companies, one focused entirely on the life sciences industry and the other on high-innovation automation technology.
  • **FLIR Systems:** FLIR develops, manufactures, distributes, and markets technologies that enhance perception and awareness. The company provides advanced systems and components used for thermal imaging, situational awareness, and security applications, including navigation, recreation, and research and development.
**The Market Overview of Artificial Intelligence**

According to reports, the global AI market is expected to grow from US$58.3 billion in 2021 to US$309.6 billion by 2026. Among the many factors driving growth in the artificial intelligence market, the Covid-19 pandemic is the chief one.
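For context, the forecast above implies a compound annual growth rate of roughly 40% per year. A quick sketch of that arithmetic (the dollar figures come from the cited report; the CAGR is simply derived from them):

```python
# Implied compound annual growth rate (CAGR) for the cited forecast:
# US$58.3B in 2021 growing to US$309.6B by 2026 (a 5-year span).
start, end, years = 58.3, 309.6, 2026 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR = {cagr:.1%}")
```

That is an aggressive growth assumption, which is worth keeping in mind when weighing any individual stock pick below.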

**Best AI Stocks to Invest In**
  • **Tata Elxsi:** Over the past decade, Tata Elxsi has been facilitating tech-based advancements. From self-driving cars to video analytics solutions, the company provides groundbreaking technologies powered by artificial intelligence and analytics.
  • **Bosch:** The Bosch Center for Artificial Intelligence (BCAI) works toward producing innovative AI technologies and implementing them in Bosch’s own products to have a real-world impact.
  • **Happiest Minds:** Happiest Minds helps organizations provide enhanced customer services, combining augmented intelligence with natural language processing, image analytics, video analytics, and other services. The company aims to create next-generation smart systems that can think, learn, and create with intelligence equivalent to humans.

#artificial-intelligence #latest-news #robotics