Chess robot breaks child’s finger: “This is of course bad”


When a chess-playing robot violated Asimov’s first law of robotics – “A robot may not injure a human being” – by breaking a seven-year-old’s finger at the Moscow Open, the story tapped into deep-seated cultural ideas and fears about robots and technology.

“The robot broke the child’s finger,” Sergey Lazarev, president of the Moscow Chess Federation, told the TASS news agency.

“This is of course bad.”

The Guardian’s reporting of the incident suggested it was the child, rather than the robot, who had behaved unexpectedly. But people responding on social media were quick to ascribe motive to the machine, with comments like: “Look, I get worked up over board games too,” or “Not now, chess robot uprising.”

Robert Sparrow, a professor of philosophy at Monash University, says this is because “Robots are often a way of telling stories about what it means to be human and about our fears of the future.”

“Situations where someone accidentally staples their finger, or reaches past the guard on a machine and is injured like that, actually look pretty much the same,” he says. “It’s just that people, when they think about robots, think about machines with minds of their own.”

Sparrow says part of the problem is that most people’s knowledge of robots is derived from science fiction. As a result, they tend to overestimate the technology’s capabilities.

When someone encounters a robot, they think it’s C-3PO or R2-D2 – but it’s actually more like a clock radio.

Due to the way humans view and interact with these machines, and robots’ reliance on features like computer vision, big data and artificial intelligence, robotics poses a range of ethical and human rights issues. Safety – as highlighted by the chess incident – is key, along with concerns around privacy, discrimination and transparency.

There is also the broader effect on society and human relationships when robots take over tasks.

Reassuringly, Australia has frameworks and laws in place that can provide guidance to designers or assist when things go wrong.

“We want all the machines and systems that we interact with to be safe, to not be spying on us,” Sparrow says. “We want consumer rights in relation to these technologies.”

People notoriously over-trust robots and technology. “It’s called automation bias,” Sparrow says. If a person sees a machine working well 95% of the time, they assume it will always work well.

This becomes a problem in certain settings. Take driverless vehicles:

“If you’re driving along a freeway, the car seems to be driving itself,” Sparrow says. “So you fall asleep, or you start reading a book.”

“And then a kangaroo jumps out.”


In this example, he says, people are generally not ready to take back control.
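To put Sparrow’s point in numbers: a system that works well 95% of the time is almost certain to fail somewhere across repeated use. The back-of-the-envelope sketch below makes that concrete (the 95% figure is Sparrow’s; the trip counts, and the assumption that trips fail independently, are ours for illustration):

```python
# Back-of-the-envelope sketch of automation bias.
# Assumptions (not from the article): each trip succeeds independently
# with probability 0.95; the trip counts are chosen for illustration.

reliability = 0.95  # "works well 95% of the time"

for trips in (1, 10, 50, 100):
    p_any_failure = 1 - reliability ** trips  # at least one failure
    print(f"{trips:>3} trips: {p_any_failure:.0%} chance of at least one failure")

# Output:
#   1 trips: 5% chance of at least one failure
#  10 trips: 40% chance of at least one failure
#  50 trips: 92% chance of at least one failure
# 100 trips: 99% chance of at least one failure
```

A machine that is right 19 times out of 20 is, over a commute’s worth of trips, practically guaranteed to need the human back in the loop.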

To prevent further chess injuries, and more serious incidents like industrial accidents or crashes involving autonomous vehicles, Sparrow says there needs to be a cautious approach to the design of machines that interact with people.

When things go wrong, the responsibility can usually be traced back to a human.

In the case of the chess incident, responsibility could lie with the robot’s designer for failing to anticipate the range of human responses.

Or, as Sergey Smagin, vice-president of the Russian Chess Federation, implied – the child could be at fault, having violated the safety rules.

Maria O’Sullivan, an associate law professor and deputy director of the Castan Centre for Human Rights Law at Monash, says a key takeaway from the chess incident is that robots generally aren’t as smart as people assume, or as sophisticated as they appear.

“When you’ve got a human interacting with the robot, the robot is really simplistic and it doesn’t deal well with an unexpected event,” she says.

O’Sullivan says there are frameworks in place in Australia for when things go wrong with new technologies, like consumer laws that cover product liability and safety standards. Australia also has ethical frameworks for the design of artificial intelligence and technologies.


In 2021, the Australian Human Rights Commission released a report on technology and human rights outlining an approach to new and emerging technologies that is consultative, inclusive and accountable, with robust human rights safeguards.

The report made several recommendations, including better transparency and legal accountability when governments or the private sector use AI technologies in decision-making, and an independent safety commissioner to provide guidelines on best practice and monitor the technology’s use.

Australia has a voluntary AI ethics framework that outlines eight principles: human, societal and environmental wellbeing; human-centred values; fairness; privacy protection; reliability and safety; transparency; contestability; and accountability.

O’Sullivan says while there can be benefits to new technologies like robots, there can also be unintended consequences and human rights implications.

For example, drones can be used for commercial purposes like deliveries or humanitarian tasks.

But when drones are used as autonomous weapons, the consequences are serious – military drones are designed and deployed to kill.

O’Sullivan says many ethicists argue the use of such weapons could make war more likely, because there are fewer physical consequences for the aggressor: “It will mean that countries will go to war more frequently because they don’t have that problem with the body bags.”

Discrimination and privacy concerns can arise from the large data sources that robots rely on to operate, or might be collecting and transmitting as they work. Sparrow says the Roomba robotic vacuum cleaner “effectively produces a map of your house … the size of your house and where your furniture is. That information is commercially valuable.”

Many robots draw on AI and machine learning technologies with the potential to repeat and even amplify bias. In 2018, Reuters reported that Amazon had stopped using an AI hiring tool because it was sexist and discriminated against women. The problem occurred because the tool was trained on the resumes of previously successful candidates, who were mainly men.
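The mechanism is simple enough to reproduce in miniature. In the toy sketch below – with entirely invented data, and in no way a reconstruction of Amazon’s system – a naive resume scorer trained on male-skewed historical decisions ends up penalising the word “women’s”, much as Reuters reported the real tool did:

```python
# Toy sketch of how biased training data produces a biased scorer.
# All "resumes" here are invented; this is not Amazon's system.
from collections import Counter

# Historical decisions skew male, so "women's" (as in "captain,
# women's chess club") appears almost only among the rejections.
hired = [
    "chess club captain",
    "software engineer",
    "chess club",
    "engineer",
    "software engineer chess",
]
rejected = [
    "women's chess club captain",
    "women's coding society",
    "software engineer women's chess club",
]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

hired_counts, rejected_counts = word_counts(hired), word_counts(rejected)

def score(resume):
    # Naive evidence per word: +1 if seen more among hires, -1 if seen
    # more among rejections. Crude, but it exposes the failure mode.
    return sum(
        (hired_counts[w] > rejected_counts[w]) - (hired_counts[w] < rejected_counts[w])
        for w in resume.split()
    )

print(score("software engineer chess club captain"))          # 3
print(score("software engineer women's chess club captain"))  # 2 -- "women's" penalised
```

The scorer has learned nothing about ability; it has simply memorised which words co-occurred with past hires.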

Facial recognition technologies are beset with concerns about racial and gender bias.

As robots become more sophisticated and human-like, concerns about deception and transparency are emerging.

Social robots – like a toy that says hello – and conversation-capable voice assistants such as Siri or Alexa are designed to relate to and engage with people, and they are becoming more common.


But as Sparrow points out, these technologies merely imitate empathy or care, “when they’re actually just mining the internet for what people have said in the same situation.”

“A big part of designing these kinds of social agents is essentially manipulating the user.”
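How little machinery that manipulation can take is easy to show. The sketch below – a deliberately crude stand-in, not how any particular assistant is built – picks whichever canned reply shares the most words with the user’s message, and the result can read as empathy with nothing behind it:

```python
# Minimal sketch of imitated empathy: retrieve the canned reply whose
# stored prompt overlaps most with the user's words. No understanding
# is involved. (A crude stand-in, not any real assistant's design.)

CANNED = {
    "i lost my job today": "I'm so sorry to hear that. That must be really hard.",
    "i won my chess game": "Congratulations! You must have played brilliantly.",
    "my dog is sick": "Oh no, I hope your dog feels better soon.",
}

def reply(message: str) -> str:
    words = set(message.lower().split())
    # Choose the stored prompt with the largest word overlap.
    best = max(CANNED, key=lambda prompt: len(words & set(prompt.split())))
    return CANNED[best]

print(reply("I lost my job"))        # "I'm so sorry to hear that. ..."
print(reply("I won my chess game"))  # "Congratulations! ..."
```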

In the case of ex-Google employee Blake Lemoine, a chatbot – Google’s LaMDA – had become so convincing that the engineer believed it to be sentient.

O’Sullivan says sex robots are an extreme example of deception. These robots are purchased for both sex and companionship, and act as a human substitute. In some cases they are even programmed to exhibit emotions and to tell humans, “I love you.”

What happens if a human forms an emotional bond with the robot as a result, or mistreats it? Such scenarios raise questions about the role of consent, and about what human-robot relationships might mean for human relationships more broadly.

For what it’s worth, the chess robot wasn’t passing itself off as human.

“It looked pretty rudimentary – you could tell it was all technical,” O’Sullivan says.

Indeed, video of the incident reveals the finger-breaking robot is essentially a large, disembodied robotic arm.

Even so, she says, the immediate response on social media was to imbue the machine arm with human-like tendencies.

“People said, ‘oh that robot was a sore loser’ or ‘that robot got angry and lost its temper.’”

Beyond important ethical issues like safety, privacy, discrimination and transparency, an overarching concern for Sparrow is the flow-on effect for human relationships and society. When robots take over tasks – reducing the level of human interaction and people’s overall sense of agency – the result is a sense of disempowerment.

“When it comes to our technological future, people feel that they have no choice in the matter,” Sparrow says.

“You’re constantly told ‘Robots and AI are going to change everything’, and you’re just supposed to applaud.”

“Whereas if some politician said ‘Look, I’m going to change everything’, you would say ‘Hang on a minute – we want a vote.’”

Originally published by Cosmos as “The robot broke the child’s finger… this is of course bad”

Petra Stock has a degree in environmental engineering and a Master’s in Journalism from the University of Melbourne. She has previously worked as a climate and energy analyst.
