Early “robots” in agriculture were actually automated harvesting machines. We’ve added intelligence and decision-making capability, and now robots harvest, plant, seed, prune, manage herbicides, and monitor and observe the characteristics of crops – all the main tasks.
Connecting those robots to the cloud means the robot gets information from sensors, allowing farmers to make future decisions based on real-life data. For example, you can distinguish between what should be harvested and what should be left in the field – and estimate the size of the harvest before picking, to plan better and reduce costs.
Robots are not replacing people in agriculture – there aren’t enough humans to do the work and that is becoming a greater problem. Globally, younger generations don’t want to work in the fields. It’s better to start doing things with robots now than wait until we don’t have a human labor force. Robots can also do tasks humans shouldn’t do, like carrying 20kg of fruit on their backs.
You also have environmental benefits, as robots are largely electrically powered rather than using combustion engines.
One challenge is that agriculture can be resistant to change and wants clear evidence that technology works. If it doesn’t work, you have to wait until next year to try something different.
Another challenge is that different fruits and vegetables need different solutions in different countries. If we design a robot that works in Scotland it might not work well in Brazil or Australia, where the weather, soil and humidity are different. We need technology to help countries where there isn’t enough water, for example, to spread herbicides efficiently so they are not damaging soils, and to monitor all the different indicators.
Ultimately, it’s about using robotics, AI and data to make agriculture more efficient and deliver larger, better quality and more predictable crops. The world’s population is growing and we need more food. In Europe, we’re taking this into vertical farming, where the inputs are controlled in an industrial process. That’s developing all the time, but there is still so much robotics can do for traditional field agriculture.
Helping blind people ‘see’ the world around them
Verena Rieser, Professor in Computer Science
We worked with the RNIB on the Be My Eyes app, which blind and partially sighted people can use to connect with sighted volunteers.
They can take their phone and point it at the back of a cereal box or medication and the sighted person reads it.
Human involvement limits when you can do this – and if you slip in the bathroom, you don’t necessarily want to call a person you’ve never met.
We thought: “What if we had AI available 24/7? How could that help people to live more independently – to be their eyes?”
We’re focused on three challenges. First, how can these models be improved for use in a conversation? When you talk to someone, you ask follow-up questions, not just one. How could we make the AI conversational?
Then we asked whether these models can adapt to new tasks and users. They are pre-trained on a massive data set, then essentially fixed. In the real world, things change. You might move house, or travel to a different country. How do robots adapt?
The third challenge is quality. Current models are trained using high-definition pictures. If a partially-sighted person is the photographer, they don’t always take a great picture. Pictures might be blurry, rotated or obscured.
The overall aim is to help people to live more independent lives, to have an assistant or companion there for them 24/7 – and to live safer lives. The model can make mistakes; they can’t be super-confident and need to be able to communicate uncertainty to warn the user to be careful.
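As a hypothetical illustration of communicating uncertainty (the function name and threshold below are illustrative assumptions, not details from the project), an assistant could compare a model’s confidence against a cut-off and hedge its wording accordingly:

```python
def describe_with_uncertainty(label: str, confidence: float,
                              warn_below: float = 0.75) -> str:
    """Wrap a model prediction in language that reflects its confidence.

    `label` and `confidence` stand in for the output of any vision model;
    the 0.75 threshold is an illustrative choice, not a project value.
    """
    if confidence >= warn_below:
        return f"This looks like {label}."
    # Low confidence: warn the user rather than sounding certain.
    return (f"I think this might be {label}, but I'm not sure - "
            "please be careful.")

print(describe_with_uncertainty("a bottle of aspirin", 0.92))
print(describe_with_uncertainty("a bottle of aspirin", 0.40))
```

The key design choice is that low confidence changes the phrasing the user hears, rather than being silently discarded.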
We want to make this a reality, to help people live fuller and safer lives where they’ve got more opportunities to understand and interpret the world about them more effectively.
Reducing anxiety in dementia patients
Dr Christian Dondrup, Assistant Professor
I’m excited about the potential of robots to have conversations with people who are anxious while waiting for medical appointments, especially those with dementia.
Heriot-Watt is working with a range of European partners and we’ve shown the robots can work in our lab, and that of our Paris-based partner.
We used students to show the robot can chat effectively to patients in a mock hospital. The robot makes small talk, discusses the news or weather, asks if the patient wants to do a quiz, and tells them practical things like where to get food, or helps them find lost items like a bag or phone. Hospital staff often don’t have the time to do these things.
The dialogue worked well. The people involved were happy with what the robots said and how they replied.
The technical side of identifying what was in front of the robot worked quite well too, and we collected good data. We need to improve the screening out of background noise, but when the robot understands what the person says, there isn’t much delay before it replies.
We’re working on speeding up the navigation of the robot, so it can identify someone who looks anxious or who might need help, and approach them to start a conversation. This summer, the robot will be tested in a real hospital in France. Previously, it was remote-controlled. Now, for the first time, it will make its own decisions in front of a patient.
This robot is quite tall – 1.65 meters. We understand patients might be taken aback when they see it moving towards them, but we think a light-hearted conversation with the robot will mitigate any concerns. I have two Master’s students working on gathering people’s opinions on it.
The new National Robotarium will have a mock-up of a hospital where we can collaborate with elderly care service providers. We’re keen for real-life partnerships to realize the potential of this work for people with dementia.
The system has the potential to work in care homes as well as hospitals. It’s all about trying to get proper conversations going, talking to people for longer, keeping them engaged and entertained, and reducing loneliness.
Longer term, we want the robot to work with multiple patients at one time, with more natural interaction, more natural group conversation.
Undertaking essential jobs in hazardous environments
Dr Sen Wang, Associate Professor in Robotics and Autonomous Systems
When it comes to maintenance checks on offshore wind turbines and oil rigs, it’s really dangerous to use human divers in deep, dynamic and turbulent seas.
Having a robot working underwater, connected to an operator, is much safer and more efficient. The robot can send back data to monitor these assets more effectively, and identify repairs when required in a more timely way.
Underwater robots use cameras to collect visual data from underwater structures, providing the remote human operator with first-hand data to understand what is going on.
The robots have other specific sensors designed for underwater, like sonar which uses sound waves to help the robot understand the environment, even in very bad visibility.
These sensors also help the robots navigate safely around the structures.
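The ranging principle behind sonar can be sketched with simple time-of-flight arithmetic. This is a minimal illustration, assuming a nominal speed of sound in seawater of about 1,500 m/s; real systems correct for temperature, salinity and depth:

```python
# Nominal speed of sound in seawater, in metres per second.
SPEED_OF_SOUND_SEAWATER = 1500.0

def echo_range(round_trip_seconds: float) -> float:
    """Distance to a target given the round-trip time of a sonar ping.

    Sound travels to the target and back, so the one-way distance
    is half the round-trip distance.
    """
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2.0

# A ping that returns after 40 ms corresponds to a target about 30 m away.
print(echo_range(0.040))
```

Because this depends only on travel time, not light, it works even in the very bad visibility described above.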
There are specific robots for specific jobs, such as pipeline inspection. The data the robots provide means we can see defects and cracks. We have machine learning algorithms to detect these automatically and relay them to the human operator, who can look in more detail.
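The detect-and-relay step can be sketched as a simple filter over model detections. Everything here (the `Detection` record, the score threshold) is a hypothetical illustration of the pipeline shape, not code from the project:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str       # e.g. "crack" or "corrosion"
    score: float    # model confidence, between 0 and 1
    frame_id: int   # which camera frame it came from

def flag_for_operator(detections, min_score=0.5):
    """Keep only detections confident enough to be worth a human's attention."""
    return [d for d in detections if d.score >= min_score]

found = [Detection("crack", 0.91, 12), Detection("corrosion", 0.30, 15)]
# Only the high-confidence crack is relayed to the operator.
print([d.kind for d in flag_for_operator(found)])
```

The point of the threshold is triage: the model screens the footage, and the operator only reviews frames the model flags.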
For wind turbines, for example, you are deploying them for 20-30 years, and you need to know they are in a good condition throughout their working lifetime.
So there are safety and efficiency benefits, but also cost benefits – it’s very expensive to send divers and boats out to do inspections.
Autonomous robots can be sent out on small electric boats, which are safer because there is no human on board, and greener than large boats using oil.
Our challenge is bringing down the cost of robots. Basic inspection robots are fairly cheap, but high-end inspection robots can cost well in excess of £500,000 – though we know the costs will come down significantly over time.
We are currently carrying out inspections using robots, but the next step is those robots carrying out underwater repairs and maintenance, either autonomously or working with a remote human operator.
More generally, robotics has a big part to play in developing our wider understanding of marine biology in the deep sea environment, as our current knowledge is very limited.