In the world
🇺🇸 Engage or divest? The big debate is back for AGM season. There has been a distinct divide between US/UK and European financial groups, reports the Financial Times, with the former defending engagement (State Street, Fidelity, Barclays). Conversely, European stalwarts HSBC and, on the corporate side, TotalEnergies have taken more decisive steps to exit fossil fuels. One thing is clear: social and environmental resolutions aren’t going anywhere, which means the bar for corporate transparency is staying put.
🇪🇺 Article 9 is back in. Just months after mass downgrades, the European investment industry is bracing for funds to revert en masse from Article 8 back to Article 9. The Financial Times reports that the ‘flip-flopping’ will begin this month, after the EU recommended a discretionary approach in lieu of its prior suggestion that fund managers be held to minimum standards. The recommendation was one in a series of sweeping reforms aimed at consolidating Europe’s markets to remain competitive with the more company- and investor-friendly US.*
🇺🇸 *Depending on where you look. The ESG rift in the US is growing ever wider: Florida Governor (and presidential hopeful) Ron DeSantis this week signed into law an aggressive new bill that blocks state officials from investing public money in line with environmental, social and governance considerations. At the other end of the spectrum, President Biden recently pledged $1bn to the UN Green Climate Fund, while the benefits created by his administration’s Inflation Reduction Act are beginning to surface across the US economy.
In the headlines: Garbage in
Suddenly, AI is everywhere. Thanks to the ubiquitous access provided by ChatGPT, regulators in the EU, US, UK — well, everywhere really — are grappling with ethical questions that have long been kicked down the road, fringe headlines are flashing code red about imminent robot invasions, and ‘AI ethics’ is the topic du jour for sustainable investors across the world. The good news? Robots aren’t trying to take over. The bad news? There are plenty of other hard-working contestants.
On two fronts, the current discourse about AI ethics misses the bigger picture.
Like most information technology, AI is an enabler. It isn’t, objectively speaking, good or bad. That much should be obvious, and is usually made explicit — though not always. Instead of drawing attention to the fact that governments in their jurisdictions have been, and still are, flouting surveillance laws, for instance, EU watchdogs have made AI the target of their rhetoric. Similarly, AI is in the hot seat for overreaching surveillance in the police force and the private sector.
These are critical issues deserving of attention, but calling for an “immediate pause” on AI systems more powerful than GPT-4 isn’t going to do away with them. Nor will it dent the unrelenting pace of disruption to laptop-class jobs: an issue which should, in any case, have long been a focus of any competent government or corporate governance team. The socioeconomic impact of automation is hardly an original concept. If it happens to be one that dawned on you for the first time in November 2022, two words: availability bias.
This brings us to the next and more important point about AI ethics.
Besides misuse, AI stands accused of bias along lines of gender, race and wealth. Once again, these are real and critical issues deserving of attention, but they aren’t the only ones. The consumer became the product with the advent of social media; today, the consumer is the technology itself. Machine learning (ML) models are trained on human-collated images and human-generated copy — yours, ours, all of it. The models simply regurgitate, which means — to borrow the riddle about the honest person forced to repeat what a dishonest person would say — ML models are the truth tellers and we are the liars.
At the risk of offending our stratospherically intelligent ML colleagues, think of it as glorified statistics rather than a black box. If facial recognition software has an error rate of 0.8% for white men relative to 34.7% for black women, that isn’t just an indictment of whatever data has been used to train the neural networks. More powerfully than any individual statistic, it evidences the scale of institutionalised racism. (Silver lining for the Hollywood screenwriters on strike: ChatGPT won’t write critically acclaimed films that pass the Bechdel test, because you didn’t write films that pass the Bechdel test.)
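The “glorified statistics” point can be made concrete with a deliberately trivial sketch. All the data below is invented for illustration: a “model” that does nothing but count labels in its training set will reproduce any skew in that set with perfect fidelity — the regurgitation problem in miniature.

```python
from collections import Counter

# Invented, deliberately skewed training data: pairs of (group, label).
# Group "A" is mostly labelled "hire"; group "B" is mostly labelled "reject".
training_data = (
    [("A", "hire")] * 90 + [("A", "reject")] * 10 +
    [("B", "hire")] * 10 + [("B", "reject")] * 90
)

def predict(group: str) -> str:
    """Return the most frequent label seen for this group in training.

    This is the whole 'model': pure frequency counting, no black box.
    """
    labels = [label for g, label in training_data if g == group]
    return Counter(labels).most_common(1)[0][0]

print(predict("A"))  # 'hire'   — the model faithfully echoes the skew
print(predict("B"))  # 'reject' — truth teller repeating the liar
```

A real neural network is vastly more sophisticated, but the failure mode is the same in kind: the bias lives in the data, and the model reports it back.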
All of this is absurdly pertinent to Util, given we use ML to evaluate corporate ethics.
The objectivity on which we pride our analytics derives not from the technical process itself but from the underlying text on which our models are trained: peer-reviewed research, which is less susceptible to bias than corporate disclosures and offers a breadth of scope no team of human analysts could match. And, according to those models, AI is positive for SDG4 (Quality Education), SDG8 (Decent Work & Economic Growth), and SDG9 (Industry, Innovation & Infrastructure). It promises to usher in net improvements to knowledge and productivity, the latter of which has stagnated since 2005.
One example? ESG. This week, the Financial Times reported that AI developments would change how sustainable investors evaluate: 1) tech firms, and 2) firms more broadly. You could go further; you could argue the implications are more fundamental than sector or methodology. In holding up a mirror to human abuse and prejudice in unequivocal terms, AI makes a stronger case for corporate accountability than any other proof point yet. Still, the same rules apply. ML models are only as good as the data on which they’re trained and the investor actions to which they lead.