Developer Speaker Series: Taylorism has entered the ChatGPT with Heidi Waterhouse
Presented by: Heidi Waterhouse
Originally aired on November 20 @ 7:00 AM - 7:30 AM EST
Scientific management says that there is a right way to do things, a best way, and that the way to make workers more efficient and standard is to enforce that one way. Large Language Models take the texts they are fed and predict the most likely thing we want next. While both scientific management and LLMs are a revolution in how work gets done, they are also a tribute to homogeneity, mediocrity, and norming. Join me for a far-ranging exploration of industrial design, standardization, and why normal and average are sometimes a deadly myth.
English
Developer Speaker Series
Transcript (Beta)
Hi, I'm Heidi Waterhouse. I'm here today to talk to you about LLMs, AI, and what we can learn from history when we're talking about them.
Let's start off with 1830. In 1830, a man invented a sewing machine.
It wasn't a very good sewing machine. It could really only do a chain stitch, which is kind of fragile, but he thought it would be pretty amazing if the French army didn't have to have hand-sewn uniforms.
However, when he put together the factory to do this, the tailors of Paris rioted and burned down the factory with him inside it.
He only barely escaped.
Why did they do that? Well, if you were a tailor who sewed uniforms and someone came along and said, I'm going to be able to do that with a machine, it would be pretty terrifying.
This picture that I've put in is not of the sewing machine riot.
Rather, it's a picture from about 20 years earlier: the Luddite rebellion.
These were people who looked at their skilled labor and looked at the automation that was coming to take it over and did not like their odds.
And they felt that possibly by destroying the machines, they could retain their skilled labor jobs and be able to keep on doing what they'd been trained to do.
It was complicated, hard and rewarding work.
What is Taylorism besides something that I put in the beginning of the talk?
Taylorism is the start of what we also call scientific management.
And you've encountered scientific management many times in your life, although you may not have known it by that name.
Scientific management led us to all sorts of branches on the tree, including things like Six Sigma, Lean, and Agile.
What Taylor was doing was saying, maybe there's a way that not everyone has to be a creative craftsman to be productive.
He himself phrased it this way: "knowing exactly what you want men to do, and then seeing that they do it in the best and cheapest way."
Who could argue with that? Who wants to have people who are wasting time or not doing something useful?
We worry a lot about productivity. And a large part of that has to do with Frederick Winslow Taylor's conception of what work should look like and how it should go.
Taylorism includes the following tenets.
He says that there is one best way to do something, and that in any process, you should find the best way to do it.
Teach everyone to do it, break it down into very small, simple steps, and then make everyone do it.
And if someone is exceptionally good at it, you should pay them more.
And if they're bad at it, or slow, you should pay them less.
Again, this seems really intuitive to us, because this is how we have always thought of work.
If you produce more widgets, you get paid more, right?
Or you get an hourly wage, but maybe you get promoted faster than the person who produces fewer widgets.
The problem with this is that one best way isn't the best for everyone.
And maybe that process that we've broken down and digested loses some nuance and gives us some problems down the line.
Taylorism also assumes that humans are basically fungible.
They're interchangeable. There's no difference between a human doing work anywhere in the world for any wage, as long as you can adequately describe what they're doing, so that the work becomes cheap and easy to hand off.
As you can imagine, this is what led to a lot of offshoring, and still does. If you can make it simple enough that you don't need skills to perform a task, there's no reason to have skilled craftspeople perform the task.
So Taylorism assumes that all tasks can essentially be deconstructed into something simple enough for pretty much anyone to do.
And the last thing that I want to fight with Taylorism about is the norm, the average.
A man named Todd Rose wrote a really interesting book called The End of Average.
And in it, he tells a story about the 1950s U.S. Air Force.
We were working on breaking the sound barrier. We were really getting our feet under us in terms of jet fighters and jet planes.
And since the dawn of the Air Force, we'd been selecting guys, let's be real, who were pretty much average-sized and putting them in cockpits and asking them to operate planes.
And this seemed perfectly logical. But jets, they move faster than propeller planes, and there's less margin for error.
And that started showing up. There started being more and more pilot-caused accidents.
And the pilots pushed back and said, we're not worse pilots.
Something is going wrong. Something is wrong with something.
So they get a pencil-necked geek in.
And he says, I think it has to do with what size the pilots are.
And the Air Force scoffed at him, because they were average-sized pilots.
Like, how hard can it be? You pick a guy who's, you know, medium-sized, and you put him in a cockpit, and he can do the things.
He went through and measured 10 crucial parts of every pilot.
Things like how long their thumb was, and how long their arms were, and how broad they were around the chest and the waist and the hips, how long their legs were.
And he compared that to the average that the Air Force was building cockpits for.
And it turns out, of the 4,063 pilots he measured, not a single one of them met that average.
They might be average height, but it was all length in their torso.
Or they might have a bigger neck, and so the helmets didn't fit right, and so it obstructed their vision.
He found not only that all 10 measurements were off; if you picked any three of the measurements, only about 4% of pilots would fall within the average range on all three.
We're not average.
None of us are average. The way the Air Force fixed this, and they did it over a lot of howling from airplane manufacturers, was to put in adjustable seats.
Something that you and I take for granted every time we sit down in the car and think, ugh, my spouse drove it last, and push the magic button to make it go to our settings.
Those adjustable seats exist because of all of these measurements that disprove the average. So when we're talking about Taylorism and the average and the one best way, I think it's really important to remember that it doesn't really exist.
That we're using it as an erroneous shorthand.
It's like saying that the mean wealth of a billionaire and a person who works at McDonald's is still very, very high: average a billion dollars against a fast-food paycheck and you get roughly half a billion each, a number that describes neither of them.
Well, yeah, but that doesn't mean that both of them are very, very rich.
It means that there's a huge disparity between them.
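To make that compounding effect with the pilots concrete, here is a rough back-of-the-envelope simulation in Python. It is a sketch built on assumptions, not the Air Force data: it treats the 10 measurements as independent and normally distributed, and counts someone as "average" on a dimension if they land in the middle 30% of that dimension.

    # Rough sketch, not the Air Force data: assume 10 independent,
    # normally distributed measurements, and call a pilot "average" on a
    # dimension if they fall in the middle 30% of that dimension.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pilots, n_dims = 4000, 10
    measurements = rng.standard_normal((n_pilots, n_dims))

    # The "average" band per dimension: 35th to 65th percentile.
    lo = np.percentile(measurements, 35, axis=0)
    hi = np.percentile(measurements, 65, axis=0)
    in_band = (measurements >= lo) & (measurements <= hi)

    # How many simulated pilots are "average" on 3 dimensions? On all 10?
    print("average on 3 dimensions:     ", in_band[:, :3].all(axis=1).mean())
    print("average on all 10 dimensions:", in_band.all(axis=1).mean())

Under those assumptions, roughly 2 to 3 percent of simulated pilots are average on three dimensions at once, and essentially none are average on all ten, which is the same shape as the Air Force result.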
So let's think about that and how that applies to Taylorism as we go into the next section.
Why do we care now about this history, about the tailors' riot in Paris and the US Air Force in the 50s?
Why does it matter to us now? We're about to face another change in how we understand work.
And I think it's going to be very useful to us if we grasp what that's going to mean, at least a little bit.
We, the workers, feel very threatened by automation.
When I was looking for images for this, I found so many articles about how labor and automation frequently find themselves at loggerheads.
One of the really encouraging articles I found was about how the dock workers who used to unload ships by hand confronted containerization and said: we, the workers, need to have a hand in this automation, a stake in it, so that we can embrace it and reap some of the rewards, rather than all of that money going to management and owners while we end up out of jobs.
So a lot of jobs were eliminated when we moved to containerization.
But the dock workers didn't suffer as much as many other unions have, because they embraced it and made it part of what they accepted.
When we're talking about the writers' strike that's going on right now, something that the writers are legitimately worried about is AI, which isn't really artificial intelligence, but we're all shorthanding it that way.
They're worried about LLMs taking their jobs: never as funny, never as surprising as a writer, but a lot cheaper, and acceptable enough for the audience, the consuming public.
The last big writers' strike is really how we got reality television.
So I have to ask you: is that what we want to have happen again? Do we accept the end of a lot of really funny, amazing scripted shows in favor of whatever some producers put together?
The dream of automation is that we could take jobs that are dirty, dangerous, or demeaning and give them to robots, which can't be demeaned and don't care about getting dirty.
This picture is from the Memphis sanitation workers' strike, which is what MLK Jr. was working on when he was assassinated.
These workers got together and unionized after two of their members were killed in a garbage truck.
And what they were saying was, we are doing dirty, dangerous and demeaning work, and we deserve to get paid for it.
We wish we could replace all of those jobs.
But one of the things that we have learned during the pandemic is that care work is some of the hardest work to automate and some of the least well compensated.
If you think about what we earn as software people and what somebody earns to keep our grandmothers safe and healthy, there's a lot of disparity there.
And it's probably a problem. And we would love to make it so that a robot could change diapers.
But that's a lot harder than making it so that a robot can scrape up a bunch of text and regurgitate it.
It's a thing that we really need to think about and worry about.
So I'm not saying that automation is going to save us from everything.
It obviously isn't because we've been trying for a long time, and our Roombas are still getting stuck under things.
None of us can see the end state of where we're going with LLMs and AI and predictive everything.
Are we going to end up living a sort of Jetsons life where nobody has terrible jobs, and machines do a lot of our boring work?
Or are we going to have the kind of world where machines tell us what we should be doing?
And we do it because hands are easier in so many ways than any kind of robot attachment.
I was reading an interesting article about people who are working with AI and LLMs as classifiers.
And they're getting paid $15 an hour to look at things. It's basically: imagine if your entire job was CAPTCHAs.
Is this a stop sign? Is this a human?
Is this a bicycle? And not just for driving, for everything. Like, is this a cat?
Is this a beagle or a bagel? Humans are so much better at identifying these things that we have to keep humans in the system in order to make all of these apparently magical AI-like things work.
And that job is probably not going to go away anytime soon.
But we're not paying much for it. We don't know what the end state of this is going to be.
We just know that we're moving really fast right now. And we should think about it.
So here are some things that I want you to remember to take away from this.
Efficiency is crucial to getting things done. I'm not saying we shouldn't be efficient.
Of course we should. We shouldn't waste time or material or energy.
We should conserve everything that we can for ourselves, for happiness minutes, for the planet.
But perfect efficiency is very rigid. And rigid things are brittle.
And so we need to remember to leave a little slack in the system so that we aren't running right up against the edges of what breaks the system.
Innovation is important, but expensive. This is the James Webb Space Telescope.
And this is the last picture we got of it before it went out and took a bunch of pictures, which are, by the way, amazing.
It cost $10 billion to do this. Ten billion dollars.
We could have done a lot with $10 billion on something like malaria eradication. But it's not as simple as trading off two goods against each other.
Innovation is important and we need to understand our galaxy so that we know where we fit in it.
And so when we think about this, we need to remember that sometimes we're making bets with a lot of money, and they're not always going to pay off in the immediate future.
And some of them aren't ever going to pay off. But we still need to make them.
There isn't always a best way to do things. You may have to adjust it. These are custom skate boots.
You can get these starting at $1,000. I would bet these are more, because they're suede.
But starting at $1,000, you can get boots custom-fitted to your feet, which doesn't matter when you're wearing Allbirds to walk around SoMa.
And it doesn't really matter when you're wearing house slippers to schlep around the house and take the garbage out.
But it is absolutely vital when you are an Olympic-level skater, or even a regional-level skater.
Having something that pinches your toes when you're trying to take off means you can't take off as well.
At the same time Frederick Winslow Taylor was working, so were the Gilbreths, Frank and Lillian.
And Lillian Gilbreth is responsible for a lot of nice things in your life that you don't really think about.
Like the step trash can, that was Lillian.
And the kitchen work triangle, so that you don't have to walk literally a mile to bake a cake.
That was Lillian. And one of the things that she said about kitchens was, you should measure the person using the kitchen and make the counter the right height.
There are a lot of tall people in the audience making a sort of sad, nodding face, because when they use a kitchen counter, it's at a standard height.
She didn't win this argument. Kitchen counters come in kitchen counter height.
And tall people find it difficult to stoop over them for long periods, and short people find it a little difficult and dangerous to reach up to them and work higher than their comfortable place.
It's not the best way to work.
Sometimes there isn't a one best way. Sometimes we need to customize for what it is we want.
So if this talk was too long and you were reading Twitter or your email or Bluesky or Mastodon, I get it, it's been a long year, man.
I want you to remember that technology is not the enemy.
It's just a lever. Where we're standing and how we're trying to move the world is what matters to how this works out.
So keep in mind that a lever can move the world or destroy the machines, and you're the one holding it.
Thank you.