Garbage in
Engage v. divest. ESG partisanship. Plus, do investors have the full picture on AI ethics?
In the world
🇺🇸 Engage or divest? The big debate is back for AGM season. There has been a distinct divide between US/UK and European financial groups, reports the Financial Times, with the former defending engagement (State Street, Fidelity, Barclays). Conversely, European stalwarts HSBC and, on the corporate side, TotalEnergies have taken more decisive steps to exit fossil fuels. One thing is clear: social and environmental resolutions aren't going anywhere, which means the bar for corporate transparency is staying put.
🇪🇺 Article 9 is back in. Just months after mass downgrades, the European investment industry is bracing for funds to revert en masse from Article 8 back to Article 9. The Financial Times reports that the "flip-flopping" will begin this month, after the EU recommended a discretionary approach in lieu of its prior suggestion that fund managers be held to minimum standards. It was one in a series of sweeping reforms aimed at consolidating European markets to remain competitive with the more company- and investor-friendly US.*
🇺🇸 *Depending on where you look. The ESG rift is growing ever wider in the US, where Florida Governor (and presidential hopeful) Ron DeSantis this week signed into law an aggressive new bill that blocks state officials from investing public money in line with environmental, social and governance considerations. On the other end of the spectrum, President Biden recently pledged $1bn to the UN Green Climate Fund, while the benefits created by his administration's Inflation Reduction Act are beginning to surface across the US economy.
In the headlines: Garbage in
Suddenly, AI is everywhere. Thanks to the ubiquitous access provided by ChatGPT, regulators in the EU, US, UK (well, everywhere, really) are grappling with ethical questions that have long been kicked down the road, fringe headlines are flashing code red about imminent robot invasions, and "AI ethics" is the topic du jour for sustainable investors across the world. The good news? Robots aren't trying to take over. The bad news? There are plenty of other hard-working contestants.
On two fronts, the current discourse about AI ethics misses the bigger picture.
Like most information technology, AI is an enabler. It isn't, objectively speaking, good or bad. That much should be obvious and is usually made explicit, although not always. Instead of drawing attention to the fact that governments within the bloc have long been flouting surveillance laws, for instance, EU watchdogs have made AI the target of their rhetoric. Similarly, AI is in the hot seat for overreaching surveillance by police forces and the private sector.
These are critical issues deserving of attention, but calling for an "immediate pause" on AI systems more powerful than GPT-4 isn't going to do away with them. Nor will it dent the unrelenting pace of disruption to laptop-class jobs: an issue which should, in any case, have long been a focus of any competent government or corporate governance team. The socioeconomic impact of automation is hardly an original concept. If it happens to be one that dawned on you for the first time in November 2022, two words: availability bias.
This brings us to the next and more important point about AI ethics.
Besides misuse, AI stands accused of bias along lines of gender, race and wealth. Once again, these are real and critical issues deserving of attention, but they aren't the only ones. The consumer became the product with the advent of social media; today, the consumer is the technology itself. Machine learning (ML) models are trained on human-collated images and human-generated copy: yours, ours, all of it. The models simply regurgitate, which means (to borrow the riddle about the honest person forced to repeat what a dishonest person would say) that ML models are the truth-tellers and we are the liars.
At the risk of offending our stratospherically intelligent ML colleagues, think of it as glorified statistics rather than a black box. If facial recognition software has an error rate of 0.8% for white men against 34.7% for black women, that isn't just an indictment of whatever data was used to train the neural networks. More powerfully than any individual statistic, it evidences the scale of institutionalised racism. (Silver lining for the Hollywood screenwriters on strike: ChatGPT won't write critically acclaimed films that pass the Bechdel test, because you didn't write films that pass the Bechdel test.)
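To make the "glorified statistics" point concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the groups, labels and counts are invented for illustration, and the published 0.8%/34.7% audit figures do not come from this code. The point is simply that disparity in error rates falls straight out of counting, which is why it traces back to the training data rather than to anything mysterious inside the model.

```python
# Hypothetical sketch: per-group error rates are plain statistics.
# The groups, labels and counts below are invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# A model trained on a skewed dataset misclassifies the
# under-represented group far more often.
sample = (
      [("group_a", "match", "match")] * 124
    + [("group_a", "no_match", "match")] * 1
    + [("group_b", "match", "match")] * 65
    + [("group_b", "no_match", "match")] * 35
)
print(error_rates_by_group(sample))
# {'group_a': 0.008, 'group_b': 0.35}
```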
All of this is absurdly pertinent to Util, given we use ML to evaluate corporate ethics.
The objectivity we claim for our analytics derives not from the technical process itself but from the underlying text on which our models are trained: peer-reviewed research, which is less susceptible to bias than corporate disclosures while also benefiting from a breadth of scope not afforded by human analysts. And, according to those models, AI is positive for SDG4 (Quality Education), SDG8 (Decent Work & Economic Growth) and SDG9 (Industry, Innovation & Infrastructure). It promises to usher in net improvements to knowledge and productivity, the latter of which has stagnated since 2005.
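For the curious, the general technique looks something like the sketch below: a text classifier that maps research findings to SDG labels. To be clear, this is not Util's actual pipeline; the corpus, labels and model choice are all invented for illustration, and a production system involves far more data and far more care.

```python
# Illustrative only: a toy text classifier mapping research snippets
# to SDG labels. Not Util's models, data or methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented corpus: snippets of research labelled with the SDG they evidence.
texts = [
    "Machine tutoring systems improve literacy outcomes in primary schools",
    "Adaptive learning software narrows attainment gaps between students",
    "Automation of routine tasks raises labour productivity in manufacturing",
    "Robotic process automation increases output per worker in services",
    "Broadband infrastructure investment accelerates industrial innovation",
    "Digital infrastructure upgrades enable resilient industrialisation",
]
labels = ["SDG4", "SDG4", "SDG8", "SDG8", "SDG9", "SDG9"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Score a new piece of evidence about a technology's impact.
print(model.predict(["AI assistants raise productivity for knowledge workers"]))
# e.g. ['SDG8']; the output depends entirely on the toy corpus above
```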
One example? ESG. This week, the Financial Times reported that AI developments would change how sustainable investors evaluate: 1) tech firms, and 2) firms more broadly. You could go further; you could argue the implications are more fundamental than sector or methodology. In holding up a mirror to human abuse and prejudice in unequivocal terms, AI makes a stronger case for corporate accountability than any other proof point yet. Still, the same rules apply: ML models are only as good as the data on which they're trained and the investor actions to which they lead.