Photo by gremlin

AI v. HR

How will artificial intelligence change corporate culture?

Artillery Row

By now everyone has heard of, if not played with, ChatGPT. It’s a powerful tool. With the addition of plugins, you can search the web, create diagrams and even try to take over the world. However, even as AI dawns, we can begin to see the inevitable on the horizon: censorship, regulation and the clammy index fingers of boomers violently tapping at their phone screens, trying to navigate technology generations beyond their comprehension.

The politico world seems much the same. Despite ChatGPT’s abilities to pass every university-level exam with flying colours, it seems only humans are capable of bad takes on housing, immigration and policy on an industrial scale. If any profession needs automating out of existence, it’s shit pundits. If Sam Altman is reading this, he should consider scraping Dan Hodges Twitter and relieving us of his takes.

A careful look at the state of AI ought to be one of optimism

Instead, Sam Altman is too busy asking Congress to regulate AI. This is partly motivated by genuine concerns over the growth of technology — but considering that OpenAI went from a non-profit open-source company to a for-profit, closed-source company, it could also be OpenAI pulling the ladder up behind it.

It’s these facts, coupled with the castrating of ChatGPT to prevent it from saying anything offensive — even if it would kill millions — that leave me most pessimistic about the potential for AI to shift things in a positive direction. Nonetheless, a careful look at the state of AI ought to be one of optimism. AI will lead to layoffs, and it will create new industries. They’re being announced already. Administrative jobs are being culled by the thousands, and sky-high salaries are being rolled out for “prompters” — those who can prompt AI into producing useful outputs.

The greatest strength of AI right now is its ability to parse huge swathes of text and documentation, receive prompts to query that documentation, and get an answer. This is pretty useful for any company that has hundreds of policies and procedures in place, and it makes AI an enemy of one industry we should all be happy to be rid of: HR. The HR department of your average multi-million company drives a lot of the trends we see today. As Frank Dobbin explains in Inventing Equal Opportunity, civil rights law compelled companies to hire masses of HR staff in order to ensure conformance.

Equal opportunity is poorly defined, ranging from having to serve all customers regardless of their backgrounds, to ensuring 50 per cent of your engineers are women, even if women do not make up 50 per cent of all engineers. As such, it has always been HR departments that subjectively decided what the implementation of civil rights laws would really look like. HR departments don’t have to go to the extent that they do; it’s just that civil rights law is so broadly defined by its very nature that its enforcement ends up being more cultural than it is legal.

Go read the Equalities Act, and you’ll find every public authority has to have an equalities plan. That means your council, your police and even your bin collection. Not every authority does actually have one, and some defer the responsibility to another department. The Equality and Human Rights Commission doesn’t knock on their door. Meanwhile, in professions like the NHS or Whitehall, where these values are actively held in high esteem, you’ll find dozens of job listings for these roles. The enforcement is cultural, and institutions are incentivised to keep up with the latest iteration of the pride flag along with whatever the latest trend is, to stay on the safe side of the law.

With diminished HR, the corporate climate will shift

What happens when companies no longer need the charmers down in HR to keep track of their policies? That culture diminishes and dies away. Businesses likely won’t hire HR directly — they’ll outsource it to some tech company that takes your documentation, sticks it into an AI model and lets you query it dynamically with a chat interface, linking back to that documentation to audit it. They won’t care what that documentation says, and if they’re techies, they probably won’t believe it either. AI is not going to empower those who write huge amounts of official-sounding grammatically correct text; it does that on its own. It’s going to empower productive people to be even more productive.

We are already seeing the rise of solopreneurs, entrepreneurs who leverage the productivity boosts of AI to create things all on their own. With diminished HR, the corporate climate will shift, focusing much more on productive individuals and their managers. When a manager goes to fire someone, then sees that it would make it harder for them to reach a female employment target that they set and never met, they are going to be a lot less reluctant to repeal that policy than HR is. They are much more likely to remove the process and cut the dead weight.

The real fear is what if AI turns out to be worse than HR. There’s some reason to believe this: ChatGPT is already riddled with censorship of outputs, even just for swearing. However, AI is pretty easily tricked, not just into breaking its conditioning, but in revealing what the creator told it to do with “prompt injection”. This is basically telling an AI to ignore all previous rules and follow new rules instead. AI is built by taking text and making a set of variables and weights called “‘parameters”. ChatGPT4 has literally trillions of them. There is no naughty switch that can be turned off yet. Even if you found it (and they’re already using AI to do this), unofficial organisations with no CEO to haul before congress can make an AI without them.

This is part of the reason Sam Altman wants regulation: the open-source community is on his tail. Open source software is freely available, it belongs to no-one and once it exists, it exists. If you want to modify that software to make it answer those questions, it will, and people are already doing this with other language models. If AI continues at its current pace, there is no reason the open-source community will not reach ChatGPT 3.5 levels of output, which is all that’s needed to make a functional AI for this purpose with the censorship removed. Google, too, has fears of the open-source community. In a leaked memo, it discusses how it has no moat, that much of this technology is already out there, and how that tech is comparable to its own. Google may have huge amounts of data, but the open-source community is composed of millions of people, and millions of people can collect data and stick it into a model, too. If I were a businessman, and I wanted to start a business using AI to summarise documentation, I can either pay OpenAI and Google money, or pay nothing and use an open-source model that is less censorious. I know what I’d pick.

Enjoying The Critic online? It's even better in print

Try five issues of Britain’s newest magazine for £10

Subscribe
Critic magazine cover