The dangers of AI farming

AI could lead to new ways for people to abuse animals for financial gain. That’s why we need strong ethical guidelines

by Virginie Simoneau-Gilbert & Jonathan Birch

At a Best Genetics Group pig-breeding farm in Chifeng, China; 27 February 2022. Photo courtesy Tingshu Wang/Reuters

Imagine a long black shipping container, packed with living animals. You tip in some human food waste and walk away. AI does the rest, controlling feeding and growth ‘so the farmer does not have to’, as the company blurb puts it. What are these animals inside – your animals? It’s not important. You don’t need to know anything about them or have any experience handling them. If problems arise, engineers can troubleshoot them remotely. And when it’s time for ‘harvesting’, no need for a slaughterhouse: AI handles that too. The animals live and die in a literal black box, only leaving as a ready-made product.

The future of farming? No, the present: this is a description of the ‘X1’ insect farm developed by the UK startup Better Origin. Of course, the farming of large animals, like pigs, chickens and fishes, is usually a lot less high-tech than this. Farms are not yet fully automated. But with the technology advancing rapidly, trends towards increasing automation are clear to see.

How much do we want AI to be involved in farming? The time for that conversation is now, before these trends are irreversibly locked in. Now is the time to set reasonable ethical limits.

What is AI used for now? Several different applications are starting to gain traction. All share the same basic vision of placing AI at the centre of a control network, using it to intelligently manage the data that flows in from an array of automated sensors. The sensors may be placed on various animal body parts and track body temperature, respiration, heart rate, sound, even rectal temperature and bowel movements. Other sensors monitor activities such as grazing, ruminating, feeding and drinking, picking up signs of lameness or aggression. Smart ear-tags allow farmers to recognise animals individually and are sold on the promise of more personalised care. AI can crunch the readings, images and sounds to diagnose health problems and predict whether they are likely to get better or worse. Meanwhile, other AI products monitor and control environmental factors, such as temperature and carbon dioxide levels. These tools aim to predict and prevent disease outbreaks, with a special focus on dangerous diseases like African swine fever. GPS trackers put on animals and satellite images provide real-time location information. This information, when handled by AI, allows farmers to predict their cows’ grazing behaviour, manage their pastures, and maintain soil vitality.
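For readers who want a feel for what sits behind the marketing, here is a minimal, purely illustrative sketch in Python. The data fields, thresholds and function names are our own inventions, not any vendor's actual system; the point is only that streams of sensor readings reduce, in the end, to per-animal alerts.

from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    """One time-stamped reading from a body-worn sensor (hypothetical schema)."""
    animal_id: str         # from a smart ear-tag
    body_temp_c: float     # degrees Celsius
    heart_rate_bpm: float
    activity_index: float  # 0.0 (inactive) to 1.0 (highly active)

def flag_for_inspection(readings: list[SensorReading],
                        temp_limit_c: float = 40.0,
                        activity_floor: float = 0.2) -> set[str]:
    """Return IDs of animals whose recent readings suggest a welfare problem.

    A real system would use trained models rather than fixed thresholds;
    this only shows the basic shape: readings in, per-animal alerts out.
    """
    flagged = set()
    by_animal: dict[str, list[SensorReading]] = {}
    for r in readings:
        by_animal.setdefault(r.animal_id, []).append(r)
    for animal_id, rs in by_animal.items():
        avg_temp = mean(r.body_temp_c for r in rs)
        avg_activity = mean(r.activity_index for r in rs)
        if avg_temp > temp_limit_c or avg_activity < activity_floor:
            flagged.add(animal_id)
    return flagged

Real products would replace the fixed thresholds with trained models and far richer data, but the pipeline is the same in outline: sensors feed a central system, and the system decides which animals deserve attention.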

Put like this, these new developments may sound like great news for animal welfare. Indeed, we want to present the case for AI optimism as charitably as we can – before turning to the problems. The optimists’ argument is simple. Farmed animals are sentient beings, capable of feeling pleasure and pain. Their wellbeing matters, and it can be positively or negatively impacted by the way we treat them. Yet traditional, AI-unassisted farming systematically misses many welfare problems because human detection is not vigilant enough. AI takes vigilance to the next level, helping farmers give their animals good lives. In the dairy and beef industry, automated sensors could spare cattle from undergoing intrusive and unpleasant interventions at the hands of humans, like body temperature measurement. Real-time location systems could allow them to graze and explore their environment more freely instead of living at the end of a tether. In the poultry and pork industries, AI could help ensure that the average chicken or pig is well fed and has enough water. Individual health monitoring tools could also enable farmers to take care of sick or injured animals quickly or euthanise those in pain. Environmental sensors designed to predict disease outbreaks would indirectly prevent the suffering and early death of many animals. And all this can be sold to farmers as an investment that is economically beneficial, since high levels of death and disease are bad for business (think of how a disease epidemic can rip through a flock of birds or a herd of pigs, destroying profit margins along with lives). Defenders of animal welfare should support investment in agricultural AI, say the optimists.

Are they right? Some of these benefits are probably overhyped. Claims that a new era of personalised AI care for individual animals is just around the corner should certainly be viewed with scepticism. On broiler farms, which raise chickens for meat, the birds are slaughtered by around six weeks of age, whereas turkeys and pigs are usually killed at five or six months. It is hard to imagine individualised AI-assisted care taking off in industries in which the individuals are so quickly replaced, and even harder to envisage this in fish farming. AI products in these industries will monitor large groups, tracking averages. In the dairy and beef industries, in which animals are raised or kept for several years, providing tailored care to individuals may be more plausible.

The optimists’ claim that animal welfare goals and business goals are in alignment looks incredibly dubious

More fundamentally, it’s crucial to look not only at the immediate, short-term selling points of AI in animal agriculture. We also need to think about the foreseeable long-term consequences. Farming is all about trade-offs: farmers care about animal welfare, but they also need to maintain a viable business in a competitive market, leading to compromises. Intensive farming, called ‘factory farming’ by critics, already involves compromises that are a widespread source of ethical concern, and we need to think about the potential of AI to exacerbate many existing problems.

We should think, in particular, about the kinds of farming AI can integrate with best. What sort of system will AI most help to make more profitable? In the case of broiler chickens, evidence suggests that cage-based systems are worse for welfare than large indoor barns, which are in turn worse than free-range systems. Yet cage-based systems are likely to benefit most from automated welfare monitoring. Currently, sick, injured and dead broilers usually have to be identified by manual inspection, a constraint considered ‘time-consuming and laborious’ within the industry. In a ‘stacked-cage’ system, where four tiers of cages are stacked on top of each other, these inspections can even be dangerous for workers, who must climb to the top, all the while inhaling the ammonia-rich, foul-smelling atmosphere. It’s no surprise to see that manufacturers of stacked-cage systems are already advertising the benefits of shifting to ‘high-tech poultry cages’ equipped with monitoring and control systems for feeding, watering and (for laying hens) egg collection. AI can collect data in real time, analyse it, detect health issues, and make predictions about the flock’s overall ‘productivity’.

Once you see this, it becomes harder to be optimistic about the alleged welfare benefits of AI. The optimists’ claim that animal welfare goals and business goals are in alignment (so that systems primarily designed to boost efficiency will, at the same time, drive up welfare) starts to look incredibly dubious. Yes, the welfare of individual animals within cage-based systems might improve, relative to the horrendous status quo, if their health is monitored by AI. But these inherently low-welfare systems may take over a larger and larger share of the market, as AI turbocharges their economic efficiency in multiple ways: reducing unwanted mortality, controlling disease outbreaks, and enabling corporations to hire fewer employees and give them less training. The result would surely be a decline in the welfare of the average farmed animal. The scope for a global race to the bottom on welfare, as the competitive advantage of the lowest-welfare systems becomes ever greater, is easy to see.

Might the risk be mitigated by tough animal welfare laws? That is more plausible in some countries than others. In the European Union, there are legal limits on stocking densities (the number or weight of animals per unit of space), and much talk about the idea of banning cage-based systems, yet progress seems to have stalled recently in the face of aggressive industry lobbying. In other countries, the development of AI could allow corporations to leave the conditions in which animals are raised largely unaddressed. In the United States, for instance, there is no federal law limiting stocking densities, even though figures from 2017 show that 99 per cent of farmed animals are kept in industrial farms. Similarly, Canada has no federal regulations directly mandating the humane treatment of farmed animals, although the federal government and provinces have broader animal cruelty laws. And China, a major driver of the surging interest in AI-assisted farming, has some of the world’s weakest animal welfare laws.

An egg-production facility in Turkey. The lowest-welfare farming systems are also those most easily integrated with AI monitoring. Photo courtesy We Animals Media

Our focus, so far, has been on the risks that AI-assisted farming poses to farmed animals. This was a deliberate choice: we think the interests of the animals themselves often get forgotten in these discussions, when they should be at the centre. But we should not forget the interests of farmers. In the age of AI, we can expect farmers to have less and less autonomy over their own farms. AI will maintain crucial parameters, like temperature or humidity, within certain ranges, but who will set those ranges? If the goals and parameters are set remotely by company bosses, there is a risk of eroding the dignity of the farming profession, turning humans into mere instruments of corporations.

At the same time as driving up stocking densities, we can expect AI to lead, as in other industries, to fewer and fewer jobs for human workers. Moreover, the nature of these jobs is likely to change for the worse. One of the deepest threats posed by AI is the way it may distort the relationship between farmers and the animals in their care. AI technologies, in effect, are sold as a way of outsourcing caring responsibilities traditionally fulfilled by humans. But can a duty of care be outsourced to a machine?

Care is a relation between two sentient beings: a carer and a recipient. It is not a relation between an animal and a machine: this is, at best, a simulacrum of care. The animals in our care are vulnerable: they rely on us for food, water and shelter. To truly care for them, we might need to cultivate empathy for them. To do this, we need to interact with them as individuals, come to know their individual capacities and personalities, gain some insight into their emotional lives, and become sensitive to their welfare needs. Now, even traditional, pastoral farming often fails to live up to this idyllic image, and modern intensive farming has already moved a long way from that. But, by introducing yet more distance between farmers and their animals, AI threatens to make genuine care even more difficult to achieve.

AI opens up new ways for people to use animals as mere means for financial ends

A critic may fire back: this way of thinking about care is ethically dubious. Caring relationships, they might argue, are valuable only because of the good consequences they bring about. In other words, they are instrumentally valuable. For example, feeling empathy for farmed animals may allow farmers to be more attentive to their suffering and act more quickly to alleviate their pain. This could be good for both the animals, who would feel less pain, and the farmers, who would feel a greater sense of dignity and pride in their work. But if AI monitoring can generate the same consequences without direct caring relationships between farmers and their animals, says the critic, we should not worry about the loss of those relationships. This debate hinges on some of the deepest disagreements in animal ethics: utilitarians are likely to side with our imagined critics, whereas those of us sympathetic to care ethics will tend to see caring relationships as valuable in themselves, even if the same consequences could be produced another way.

We don’t think using AI to take care of animals is problematic in all possible circumstances. Imagine a high-tech animal sanctuary, with no goal other than to care for animals as well as possible. In this imaginary sanctuary of the future, AI is only ever used to facilitate caring relationships between people and other animals, never to replace them. Residents roam free but are tagged with collars. The collars track their location and allow individual recognition and care. Meanwhile, AI analyses livestreams from CCTV cameras, monitoring for signs of bullying, aggression and poor health, all while optimising the animals’ food and water intake, and administering individualised doses of medication where needed. Welfare is always the priority – there is never any need to compromise with economic goals. Would it still be wrong to use AI to monitor for emerging welfare risks?

On the whole, we think not. Some interventions, such as rectal sensors, might still be too extreme. Proponents of animal rights might argue that such sensors fail to respect the animals’ right to bodily integrity. But purely external monitoring seems less problematic. Admittedly, concerns about privacy may remain. Think here of a ‘human sanctuary’, where humans must put up with monitoring of their every movement: most of us would find that an obvious violation of privacy. Yet it is not obvious that nonhuman animals have an interest in privacy. It may be that how they appear in the eyes of human observers – or AI – is of no concern to them, and it’s not clear why their flourishing would depend on not being watched.

This thought experiment suggests that the ethical problems in this area are not intrinsic to AI. The problem is rather that AI opens up new ways for people to use animals as mere means for financial ends, failing to respect their interests or inherent value, and the duties we have towards them. AI risks locking in and exacerbating a tendency to see farmed animals instrumentally – as units to be processed – rather than as sentient beings with lives of their own, in need of our care. This is the likely result when AI is put to work in service of greater economic efficiency, unchecked by ethical constraints. But AI doesn’t have to be used that way.

Let’s return to the present moment. How should governments regulate the use of AI in farming right now? One option is to ban it outright, pre-emptively. But while this may sound appealing, it would lead to serious, probably insurmountable difficulties on the ground. It would require legal definitions of what counts as ‘AI-assisted farming’, as opposed to mere assistance by regular computers, which will gradually come to have more and more AI products installed on them. It’s hard to imagine a realistically enforceable ban that targets only the products with possible farming applications, leaving everything else intact. The AI genie is out of the bottle.

A more realistic way forward is to come up with a code of practice for this emerging industry – a set of ethical principles tailored to farming applications. Here, too, there are pitfalls. Recent years have seen many attempts to draw up ethical principles for the AI sector as a whole. Yet principles aiming to cover all uses of AI are extremely high-level and vague, allowing the industry to claim it takes ethics seriously while, by and large, continuing to act as it wishes. For example, an EU working group proposed in 2019 that AI systems ‘should take into account the environment, including other living beings’, but this is so broad it implies no meaningful limits at all on the use of AI in farming. A review of 22 sets of AI ethics guidelines concluded – brutally – that AI ethics, so far, ‘mainly serves as a marketing strategy’.

We need to do better. We need Goldilocks principles for AI in farming: detailed enough to provide real ethical constraints and to steer the sector in the right direction, yet still general enough to cover a wide range of future applications of the technology. The goal should be a set of principles strong enough to ensure AI is used in a way that improves rather than erodes animal welfare standards. We don’t claim to have all the answers, but we do want to make four proposals to get the discussion started.

Principle 1: Advances due to AI must not be used as a reason to increase maximum stocking densities, and must not be allowed to drive a shift towards greater use of cage-based intensive systems.

As we noted earlier, AI assistance is already helping companies using cage-based methods to increase their efficiency and reduce their reliance on human labour. This raises the spectre of these inherently low-welfare methods forming an ever-larger share of the global market. Responsible development of AI in farming must take a clear stand in opposition to this grim prospect.

We must be able to hold companies to account if they fail to act on welfare problems detected by their own systems

Principle 2: When AI systems monitor welfare problems, data about how many problems are being detected, what the problems are, and what is being done about them must be made freely available.

‘Transparency’ is a major theme of AI ethics guidelines, but it can mean many things, some more helpful than others. Merely stating on a label that AI has been involved in the production process says very little. Meaningful transparency – the kind we advocate – is achieved when the public can access key facts about the welfare problems AI is actually detecting and how they are being dealt with.

Principle 3: Companies should be held to account for welfare problems that are detected by their AI systems but not investigated or treated. Companies must not be allowed to dial down the sensitivity of welfare risk sensors to reduce false alarms.

Some welfare problems are more costly, in economic terms, than others. An avian flu outbreak could be extremely costly, whereas lameness in a single chicken will cost little. Part of the economic potential of AI detection systems is that they can take this into account. They allow the user a degree of control over their performance parameters, especially their sensitivity (what proportion of real cases they detect) and their specificity (how well they avoid false alarms). Accordingly, they will allow companies to be hypervigilant about the costliest risks while remaining more relaxed about less costly problems.
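To see how easily that dial can be turned, here is a purely illustrative sketch in Python; the risk scores and thresholds are invented for the example and do not come from any real product. Raising the alert threshold drives false alarms down, but only by letting more genuinely sick animals go undetected.

# Illustrative only: how a single detection threshold trades sensitivity
# against false alarms. The numbers and names are ours, not any real product's.

def evaluate(scores_sick, scores_healthy, threshold):
    """Compute sensitivity and specificity for a given alert threshold.

    scores_sick / scores_healthy: model 'risk scores' for animals that are
    genuinely sick vs genuinely healthy (higher = more suspicious).
    """
    true_pos = sum(s >= threshold for s in scores_sick)
    true_neg = sum(s < threshold for s in scores_healthy)
    sensitivity = true_pos / len(scores_sick)      # sick animals caught
    specificity = true_neg / len(scores_healthy)   # false alarms avoided
    return sensitivity, specificity

# Toy risk scores for 5 sick and 10 healthy birds.
sick = [0.9, 0.8, 0.6, 0.5, 0.3]
healthy = [0.4, 0.3, 0.3, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.05]

for threshold in (0.25, 0.55, 0.85):
    sens, spec = evaluate(sick, healthy, threshold)
    print(f"threshold {threshold}: sensitivity {sens:.0%}, specificity {spec:.0%}")

Run with a low threshold, the toy detector catches every sick bird but raises some false alarms; run with a high threshold, it raises almost no alarms at all, and quietly stops seeing most of the problems it was sold to find.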

Is this a good thing? Not in the absence of meaningful transparency and accountability. Suppose a company finds it unprofitable to treat some health problem, such as keel-bone fractures in hens. They tell regulators and the public, correctly, that they have a state-of-the-art AI system monitoring for that problem. What they don’t say is that, to cut costs, they have dialled the system’s sensitivity right down and only ever treat the most severe cases. We need to be able to hold companies to account if they fail to act on welfare problems detected by their own systems, and they need to be prevented from dialling down the sensitivity of their detectors.

Our last proposal is intended to protect the dignity and autonomy of farmers:

Principle 4: AI technologies should not be used to take autonomy and decision-making power away from frontline farmers. Decisions currently under the farmer’s control should remain under their control.

Imagine, then, a world where Principles 1-4 are adopted and enforced. Would it be an ideal world? Of course not: many problems would remain. In an ideal world, we would relate to other animals very differently, and might not raise them for food at all. But in our far-from-ideal real world, our proposals would at least make it a whole lot harder to use AI to drive down welfare standards. With these principles in place, AI might even be a friend of animal welfare – raising public awareness of welfare issues rather than hiding them in black boxes, and increasing the accountability of farming companies for the welfare problems they create.