
AI is Overhyped as an Investment, Will Only Worsen Inequality (Season 5, Episode 14)

Last updated on June 7, 2023

Featuring Simon Johnson, co-author of ‘Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity’

MIT economist Simon Johnson joins the podcast to discuss his book, ‘Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,’ and specifically why artificial intelligence is likely overhyped, and not just from an investment perspective.

Content Highlights

  • Artificial intelligence’s default trajectory is about machine intelligence, code for replacing people with machines. Historically, this has not brought great things (1:21);
  • Indeed, technological progress has not always benefited everybody, often just a small group at the top. Today’s advancements are no different (5:11);
  • Don’t expect a boost to business either. Productivity gains should be limited (9:17);
  • Some of the dangers of ChatGPT and Google’s Bard, which are driving most business development around AI (20:12);
  • Background on the guest (24:26);
  • Thoughts on the current market environment and its potential to spill over into crisis (27:43).


Transcript

Nathaniel E. Baker 0:35
Simon Johnson, you have written a book here with Daron Acemoglu, whose name I believe I pronounced correctly. I was just going to say I’m glad it’s you and not him, because his name is hard to pronounce. The book is called ‘Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.’ I read through it, and there is something here that really jumped out at me, and it’s about AI, artificial intelligence. We hear a lot about this nowadays, ChatGPT, all this stuff, how AI is going to be this great opportunity, not just an investing opportunity, but an opportunity to enhance productivity and do a whole bunch of other things that are going to be great for humanity, great for corporations if I dare say, and so on. And you don’t quite buy it. Tell me why.

Simon Johnson 1:31
I think, Nathaniel, the core problem in our perspective is that the default trajectory we’re on, the one that’s driven by the visionaries of Silicon Valley, let’s call them, is one in which the emphasis is on machine intelligence, which is a little bit of code for replacing people with machines. And when we look back over a thousand years of history, and the book is basically the backstory of the generative AI moment, which is today, episodes when people have been replaced by machines have been much less favorable in their outcomes for most people than moments when we figured out how to use machines to make people more productive. That’s the key distinction we need to focus on today.

Nathaniel E. Baker 2:12
Got it. Okay. And I wanted to read something here from you. This is actually from the bibliography. It says, “AI is likely to generate more limited productivity benefits than many of its enthusiasts hope, because it is expanding into tasks where machine capabilities are still quite limited, and because human productivity builds on tacit knowledge, accumulated experience, and social intelligence.” Talk to me about that angle.

Simon Johnson 2:39
Yes, this is the idea of so-so automation, which Daron and his co-author Pascual Restrepo have introduced in a series of papers. A good example would be self-checkout kiosks at grocery stores, where the work has been transferred onto the customer. In part they’re okay, but they’re really not great. The productivity of workers has not gone up, and the effect on profits of the marginal worker, the additional worker you might hire, also seems to be pretty low, actually, in the self-checkout world. So mostly what they’ve done is create more aggravation that customers have to put up with, with no sign of a major productivity breakthrough of the kind we got when Henry Ford automated the factory line for making cars.

Nathaniel E. Baker 3:24
Okay, so in what ways is the existing AI limited in that regard, do you know? And what are your ideas for how it can be better?

Simon Johnson 3:34
Well, I think the problem is this idea of replacing people completely, because that’s the vision, right? You’re saying we don’t want customer service representatives, we want an AI to do that instead. That’s pretty difficult right now. And also, you don’t just want platitudes when you call customer service, right? You want somebody or something that can solve problems, that can mobilize resources, that can actually solve your problem. And usually that involves getting through to someone with sufficient responsibility, which can be extremely aggravating. We would rather support a vision that has been there in the development of computers all along but is not the ascendant vision right now, which is to use the machines and the algorithms to enhance individual capabilities, enhance what people can do, just like we’re using Zoom right now to enhance our ability to communicate and to see each other. We could have talked on the phone before Zoom came along; now we’re using Zoom, and it’s much better, I think, for this kind of conversation. Let’s look for opportunities to enhance individual capabilities and what people can do using AI, rather than replacing people.

Nathaniel E. Baker 4:41
Yeah, okay, fair enough. And I don’t want to draw away too much from the broader topic of the book, which is about inequality, and how progress, or what we conceive as progress, has not always resulted in benefits for everybody. In fact, I’ll throw in another quote here: “We wrote this book to show that progress is never automatic. Today’s progress is once again enriching a small group of entrepreneurs and investors, whereas most people are disempowered and benefit little.” And AI, like I said, is only a chapter in this; it jumped out at me because it’s so topical nowadays. But more broadly speaking, you think these advances that we’ve made in technology don’t necessarily benefit everybody. In fact, they can be a net detriment to most.

Simon Johnson 5:38
Yeah, absolutely. I think that’s the pattern of the last thousand years, where we’ve had this long trajectory of technology improving and productivity improving, so you can say, well, what’s there to complain about? But actually, when you look at the episodes more closely, there was always a struggle over who would benefit. And in some of those struggles, including medieval times, including the early Industrial Revolution, there was a huge amount of innovation and very few people prospered. Now, the 20th century went much better; there was much more sharing of the gains from productivity increases. But after 1980, so about 40 years ago, that sharing really receded, became much less salient, and we’ve had a widening of inequality in wages and incomes. Many people who haven’t finished college, for example, in the United States have not gained in terms of real wages over the past 40 years. That’s kind of dramatic, considering that we continue to invent and innovate, we’ve had the whole digital revolution, and we’ve had a lot more pressure on companies, supposedly, to become more efficient. Where have those gains gone? Because they were real gains. They’ve gone to a few people.

Nathaniel E. Baker 6:38
So what is a potential outcome of this? What do you think needs to happen, and what do you think will happen, if anything?

Simon Johnson 6:46
Well, I think on our current trajectory, which is sort of put the blinders on and charge straight ahead, try to replicate what humans can do, the so-called machine intelligence approach, I think we’re in for a rough ride. I think we’re going to be replacing intellectual labor, just like we replaced manual labor early in the Industrial Revolution. And if that happens with big increases in productivity through automation, there can still be positive spillover effects on the rest of the economy, and the other parts can grow. But if what you’re doing is really just replacing people, shifting the balance of power, if you like, between capital and labor, and you’re doing it really fast, which is what the whole GPT line of generative AI seems to promise, then it’s going to be very hard to generate enough new productive tasks for people who are either displaced or never get an opportunity when they graduate from college or school. We could be talking about a much bigger hit to our labor market than we’ve seen in a long time. What happens after that? Well, your guess is as good as mine, but those sorts of hits don’t usually lead to good political outcomes.

Nathaniel E. Baker 7:48
And again, there is historical precedent for this, where new technology has caused a lot of unemployment and things like that. And like you said, you see the more recent advances in technology as more of the type we saw several hundred years ago than what we saw in the 20th century, and even the 20th century had plenty of hardship as well.

Simon Johnson 8:09
For sure, that’s true, although the Great Depression was mostly not about technological unemployment. And when we’ve had disruptions historically, in a place like the United States it has mostly caused low wages. So people get a job, but they’re driven into fast food or other low-productivity activities where their wages are lower. So we haven’t experienced mass unemployment after 1980, not on a secular basis, but we have experienced a big widening of income differentials. And that widening, I’d say, on our current course is set to become even larger.

Nathaniel E. Baker 8:44
And importantly here, you also say that businesses won’t even necessarily benefit from this, at least from AI, in terms of productivity and such. Right?

Simon Johnson 8:55
Yes, based on what we’ve seen so far, I think the productivity gains may be quite limited. Companies may still adopt it, because it’s fashionable, and also because it tilts the balance of power, in their view, away from labor, so it makes labor more compliant. But the idea that there are going to be major breakthroughs, like we saw in the early 20th century with mass manufacturing, that right now does not seem to be imminent.

Nathaniel E. Baker 9:18
Okay. So what is the remedy here for all this, this disconnect between, I guess, power and technology and the inequality that we’re seeing? Do you have anything to propose?

Simon Johnson 9:31
Yeah, we have three proposals that we’re leading with right now, and all three are based on the expertise of other people, to be clear. We’ve thought deeply about these things, but what we think we can do is pull them all together in one package. Jaron Lanier, the computer scientist, is arguing for data dignity, so that we should own our data, and we should come together in various forms of agglomerations or consumer unions and use that as leverage vis-à-vis the big data companies, the AI companies. I think that’s very interesting and very important, and without that, probably other good things can’t happen. The second idea is to put very strong safeguards around surveillance. This idea is preeminent in Shoshana Zuboff’s book, The Age of Surveillance Capitalism, and I think she’s absolutely right. There’s a lot of bipartisan support for constraining surveillance; there is not, however, support for that idea in China. And that may be the new global split, between countries that are very careful and constrain surveillance, and other countries and leaders who go all in on surveillance, the authoritarian countries of the world. I think we may just have to recognize that. And the third piece comes from Kimberly Clausing, who is a professor at UCLA Law School and a former senior Treasury official, who proposes that we have a graduated corporate income tax, a tax on profits based on how much profit you make. So it’s based on the level of profit, like the income tax is, or should be. Right now we have a graduated system, but it goes the other way: small companies may pay more than big companies, because big companies can hide their profits. We need to flip that and say, if you’re over, for example, $10 billion in annual profits, you pay a tax rate of 35 percent, but if you’re lower in terms of annual profits in one company, you pay a lower rate. And of course, what that would do is give the companies an incentive to break themselves up, because the shareholders are going to be saying, hey, why are we paying 35 percent when we could be paying 21 percent? Let’s create some value for the shareholders. This would also be good for management, actually; it would create more opportunity. What we’re pushing for there is a plurality of business models, so you don’t just have, for example, two companies controlling the entire AI space. I don’t think you can get there through regulation or through the courts, or it would take a very long time through the courts, but you can get there through the corporate tax system. That’s definitely something we should propose and push forward.
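To make the arithmetic of that graduated schedule concrete, here is a minimal sketch in Python. It is purely illustrative: the $10 billion threshold and the 35 and 21 percent rates are the example figures mentioned above, and treating the higher rate as applying only to profit above the threshold is one reasonable reading of the proposal, not something specified in the conversation.

```python
# Illustrative sketch of the graduated corporate tax idea discussed above.
# Threshold and rates are the example figures from the conversation;
# the two-bracket structure is an assumption made for illustration.

def corporate_tax(annual_profit: float) -> float:
    """Tax owed (USD) under a simple two-bracket, profit-based schedule."""
    threshold = 10e9   # $10 billion in annual profits
    low_rate = 0.21    # rate on profit up to the threshold
    high_rate = 0.35   # rate on profit above the threshold
    if annual_profit <= threshold:
        return annual_profit * low_rate
    return threshold * low_rate + (annual_profit - threshold) * high_rate

# One firm earning $30B pays more than three firms earning $10B each,
# which is the incentive to break up that Johnson describes.
print(f"{corporate_tax(30e9):,.0f}")      # 9,100,000,000
print(f"{3 * corporate_tax(10e9):,.0f}")  # 6,300,000,000
```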

Nathaniel E. Baker 11:39
Okay. I mean, isn’t that a bit of a non-starter, especially considering that, let’s face it, these political parties are bankrolled by corporations?

Simon Johnson 11:47
Well, it’s an uphill struggle, of course, but that’s always the case with any of these policy questions. And I think what we’re going to see with AI is such a concentration of power, and such a concentration of profit, that people are going to be looking at it and saying, wow, wait a minute, why do these guys have all the profit? And it’s not elsewhere; even in the business ecosystem, other companies are going to get squeezed. So within the corporate sector, I think there will be support for this. And this is not confiscatory vis-à-vis business. It’s saying the largest companies, with the largest profits, should pay a higher corporate tax. But hey, if you want to make yourself smaller, you can go back down to where the corporate tax rate was in the past. Absolutely no problem with that. It’s regulation through taxation, encouraging competition, which is what the whole private sector claims to like.

Nathaniel E. Baker 12:35
Yes, until they don’t. What I’m wondering is if you have any thoughts here; this has come up before on this podcast with other book authors and academics. We’ve talked a lot about antitrust, and how it was originally intended as a tool to accomplish exactly what you’re talking about, or at least prevent the opposite from occurring. And that’s kind of not been the case recently, if you look at the numerous mergers that have happened. Do you have any thoughts on that? Or is it outside your area of expertise?

Simon Johnson 13:08
Well, we thought a lot about antitrust in this context. And more broadly, I think the problem is, yes, it hasn’t been used very aggressively. It has also evolved to be more effective against mergers and acquisitions as opposed to organic growth, and we’re talking about organic growth primarily here. It’s also obviously very targeted at, let’s say, traditional forms of market power, where you get a monopoly, you raise prices, and you squeeze consumers in ways we can recognize. But when you give stuff away for free, or when you have other business models of the kind in the platform digital economy, it hasn’t been quite so effective. Look, I think redress through the courts when there’s conspiracy is a very good idea; conspiracy to corner markets is generally illegal. But I don’t think that’s what’s going to get us immediate action here, and I don’t think that’s what’s going to move the needle. In the age of AI, the age of generative AI, with these problems coming at us from all directions, we need to get more creative about solutions.

Nathaniel E. Baker 14:05
Yeah, fair enough. And you talk a little bit here about, and you just mentioned, this idea of surveillance, which would actually be one of the major advances, if you can call it that, that AI will bring about. And you said that one way to address this would be to basically not allow corporations to do it, or to limit them. But would corporations really be beholden to this? Because they can now, with at-will employment, limit people’s freedom of speech sometimes, because they need to, and other things. So would that really be something that these companies would adhere to, even if they do agree to sign off on it?

Simon Johnson 14:45
Well, you’re obviously going to need some enforcement on that, and you’re going to need protection for individual rights in an appropriate manner. I think that’s going to be very important; this stuff doesn’t happen by itself. I’m sure you’re right that companies will lean very heavily towards more surveillance, as will local government, by the way, as will police forces, as will security agencies and so on. So I think you have to put safeguards in place, and you have to have enforcement of them. The Consumer Financial Protection Bureau, which is not everyone’s favorite by any means, has proven very effective in protecting consumers against some of the abuses we used to see in the financial sector. So I think that kind of focus on consumer rights, if you like, but of course also worker rights, is going to matter. And the good news here is that both people on the right and people on the left don’t want unfettered surveillance in this country, for slightly different but actually very related reasons: they’re extremely concerned about who’s looking over their shoulder, and with good reason. They don’t know who’s going to be in charge of the government next time around, and they fear that it might be used against them. I think that’s fair. And I think very strong, clear safeguards on that, and limits on what private companies can impose on people, are going to be important. And you’re going to see, I would suggest, very different outcomes in China. We’re going to publish this book in about 20 countries and 20 languages around the world, including in Taiwan. We were approached by a mainland Chinese publisher whose representative said, well, we’d like to publish it, but this is what we need to cut out, and it was about a third of the book. Perhaps that was only their opening price; perhaps they were offering us a special deal, I don’t know. But we’re not going to do that, obviously. And that’s because anything about social media, anything about the Communist Party, anything about surveillance, all of that stuff would have to go in those proposed cuts, because that stuff is very sensitive, because that’s what they’re doing.

Nathaniel E. Baker 16:33
And you just raise an interesting point. I mean, you guys aren’t necessarily after the money that will come from any proceeds you get from China, but many consumer-facing corporations are. They have had this exact dilemma, and have effectively rolled over for the Chinese government in order to access those consumers. So that’s one issue where maybe this doesn’t speak very well to how this whole thing plays out and who might win, right?

Simon Johnson 17:03
Well, you’re certainly right that many companies have gone out of their way to accommodate the Chinese government’s wishes, primarily, I think, because they thought that down the road there were future profits to be had in the Chinese market, which is large, and which in many cases turned out to be illusory, by the way. Look, what a company does in China is to some degree its business, as long as they don’t transfer sensitive technology and so on, but that shouldn’t have any impact on what we allow in the United States. Also, I think it’s very clear, and this is something that came through in the debate about the CHIPS and Science Act last year, for example, that we still have a technological edge over China in key aspects of technology, like semiconductors, that matter a lot for the development of AI. And we are not calling for a slowdown in that innovation, by the way; we want innovation to go more towards helping people, machine usefulness rather than machine intelligence. I don’t think stopping innovation generally ever works, and if you tried to stop it, people would just say, well, China is going to do it, and then you’re actually going to get more support for it from the government, not less. So let’s continue with innovation, but let’s have more plurality of business models, let’s have more competition at every level of the supply chain for artificial intelligence, and let’s do what we’re good at, which is generating ideas that add value. And let’s constrain and restrict what has been a bit of a weak point in this society for a long time, which is the tendency to slip into unguarded surveillance.

Nathaniel E. Baker 18:29
Do you think there’s no kind of market solution to this? Just to put on a purely capitalist hat here for a minute: if you look at social causes, for example, many private corporations, for whatever reason, have been at the forefront of pushing for some of these. Could it not be that there will be some new type of company that comes along with AI and does things the right way? I mean, the whole idea of profit and, you know, doing good don’t have to be entirely divorced, do they?

Simon Johnson 19:07
Well, look, I think we’re in the midst of an episode that can be summarized by one of Milton Friedman’s favorite sayings: the business of business is business. Right? So could we have a plurality of business models? Could we have alternative ways of organizing ourselves? Could AI empower that? Yeah, sure, absolutely. Is that what we’re seeing right now? Is that the sort of central driving force? No, it’s profit seeking. It’s standard Silicon Valley type thinking. And I think we should be realistic about that and deal with it on its own terms. And the key to that would be preventing there being a monopoly of business models, and a monopoly of mental models, actually, I would say. So I think part of the danger of what we’re getting from GPT and what we’re seeing coming out of Google is that they’re tending toward a certain way of organizing the data and presenting the data, and emphasizing this machine intelligence, replace-the-humans aspect. And that’s where you’re going to see a lot of the business developing, following that. Let’s push it in a different direction. And let’s have the argument: when we see these people in person, let’s have these conversations in private, let’s have the discussions in public, and see if we can move the business models towards a plurality of thinking about what’s appropriate, and so on. And let’s put the safeguards in, in terms of a very high corporate tax rate on mega profits, because those will be windfall profits resulting from massive market power, and that’s in part power over our brains.

Nathaniel E. Baker 20:32
Yeah, that’s a scary thing, one of the many scary things that we’ve talked about. Have you seen in your travels or in your work any companies that might be examples of doing things the right way that you talk about? Or is it all just really following the Silicon Valley, venture capital type of thing?

Simon Johnson 20:53
Well, it’s very early days, of course. There is a plurality community that attempts to do this; it’s coming out of, related to, but perhaps going beyond, Web3. So those ideas are in the mix. I don’t think they’re particularly salient or well funded or making a huge amount of progress, certainly not compared to the GPT line of models, the large language models, which are dominating the conversation right now. And of course, that’s part of the problem: when you get the hype, and you get the concentration of attention on a particular way of organizing data and doing business, that’s where everybody tends to get pulled.

Nathaniel E. Baker 21:31
Okay, interesting. All right, cool. Simon Johnson, I want to come back and ask you some more questions, ask you about your background and what you’ve done prior to writing this book, and talk some more about it. But let’s first take a short break. If you’re a premium subscriber, you will not get the break. Don’t touch the dial, we’ll be right back. In fact, we already are. Welcome back here with Simon Johnson. I believe you are at MIT. This is the segment of the show where we ask our guest more about himself, in this instance, and what he has done in his career. So I’m very curious to hear what you’ve been doing and how you came to write this book. I know you wrote another one that we want to talk about a little later for different reasons. But yeah, go ahead.

Simon Johnson 22:22
Well, in a nutshell, I’m an economist. I have a PhD from MIT, actually, from the 1980s. And I worked on economic development, financial stability, and economics and technology issues for a long time. In the 2000s I worked at the IMF, and I was chief economist of the International Monetary Fund in 2007 and 2008. Coming out of that experience I wrote a couple of books, because of course we were in the midst of the global financial crisis. One was called 13 Bankers, about what had actually gone wrong with the financial system. The second was called White House Burning, which was about how the financial crisis had affected the fiscal system, because obviously we are experiencing a series of fiscal problems and fiscal debates in the US, including the current one on the debt ceiling, again. And my most recent book prior to this one was called Jump-Starting America, which attempted to look at how we accelerate growth and how we spread the benefits of growth around the country. That led me to this book, which is the thousand-year backstory of how we got to this generative AI moment, what this moment means, and how we could do better in terms of shared prosperity if we study and think about some of the things that went right, and some things that went wrong, in that thousand years of history.

Nathaniel E. Baker 23:36
And before anybody thinks that this is all just fancy talk, the previous book did lead to some actual outcomes in Washington, DC, right? Tell me about that.

Simon Johnson 23:48
Right. So we published Jump-Starting America in 2019 and had some positive response, but a limited response. We were proposing a big increase in science spending, to spread that around the country, and ways to actually implement that. And I think what happened then was COVID arrived, and COVID, of course, was a massive shock to everybody’s thinking about everything, including distance, including science, including how we organize production and society. Coming out of that, people were looking for ideas and things to do differently, and maybe a little bit better, and we got bipartisan support for ideas around the science funding and the science deployment model that we had in our book. This became part of what’s known as the CHIPS and Science Act of 2022. We’ll see how people deal with that; obviously there’s a lot of work to be done. But at the local and state level there’s tremendous enthusiasm for building scientific and technological capabilities involving a lot of people, so there’s a human capital and education piece as well as a science piece. And it seems like there’s funding available from the government as well as enthusiasm from private sector funders on commercialization. So, early days yet for sure, but it was gratifying to push people in what we regarded as the right direction.

Nathaniel E. Baker 25:04
Very cool. Very interesting. So I’m curious, as somebody who obviously was very involved with the 2008 financial crisis and has studied human innovation and its history, where you see our current setup, if you will. We have these new forms of innovation on the one hand; we just had a kind of mini banking crisis that may or may not be completely over yet; the economic cycle seems to be winding down now after COVID; and there’s some talk, there’s always talk, about there being a new crisis. Where do you view things in that whole picture?

Simon Johnson 25:45
Well, on the crisis point, crises are to be avoided, because they get in the way of everything, right? They mess up everything and they hurt a lot of people. And I think we did well to avoid broader problems coming out of Silicon Valley Bank and Signature Bank; perhaps it could have been handled a little bit better, but in a crisis things are always a scramble. So we’ll see what happens as the Fed decides whether it has managed to decelerate inflation enough; we’ll see if it’s a soft landing or a harder landing. But the bigger issue, of course, if you look at how countries do over a period of decades, is what happens to productivity, what happens to the underlying growth once you take out the cycle, and what happens in terms of the development and application of innovations. We have this fantastic machinery for innovation in the US, but we’ve become terribly fixated on a few digital ideas that have been high impact but have not led to shared prosperity; they’ve contributed to a widening of income inequality. So isn’t there a way we can take that same innovativeness and do what we did in the ’40s, ’50s, and ’60s, which is create a model with much more shared prosperity? We have a new initiative at MIT on the future of work that is trying to develop exactly those kinds of actionable items, very private sector oriented, very much about getting the policies right, but also about constructing a narrative and helping people understand the economics of this: why does this make sense, and how is this good for the country as a whole? And I think the good news is a lot of people want to have this conversation, particularly after 40 years that have been a bit disappointing in terms of shared prosperity.

Nathaniel E. Baker 27:26
Yeah, what do you make of the argument that what we saw in the ’50s was largely due to America being able to export stuff, and now we’ve become more of an importer, with all the offshoring that happened? That’s kind of a genie you can’t put back in the bottle anymore. Do you think about that at all?

Simon Johnson 27:45
Look, I think manufacturing can well come back in the US; we can have an increase in manufacturing output. I don’t think manufacturing employment will necessarily go up a lot, so that’s the caveat on the jobs-went-offshore-and-are-coming-back idea. However, I think we can create a lot of jobs around science and technology innovation, for example, and we can absolutely sell to the world. I mean, we are 320 million people in a world of 8 billion that’s going to 10 or 11, or some people say 12 billion. Those people have a lot of problems. We have deep innovative capacity; we’re much more able to invent things and deploy technologies than anyone else around the world. So I think you have to ask the question: what’s the problem we’re trying to solve, what are the solutions we’re proposing, and how much are we charging for that? I think America is extremely competitive if you see it in those terms. We do tend to trip ourselves up; we do tend to get very self-absorbed and have these crazy fights among ourselves that no one else can even understand. But once we get out of that, once you go beyond that, the US is a tremendous positive force for the world, for this century and, as far as I can see, beyond. So as long as we can tap into that and not ruin ourselves inadvertently, I think we’re going to do well.

Nathaniel E. Baker 28:57
I wonder if you have any thoughts on the US dollar being the reserve currency, and whether that’s going to continue? It’s another thing that’s very topical in the news right now, admittedly outside the scope of your book.

Simon Johnson 29:07
Well, that was very much dealt with three books ago, in White House Burning. The US having the reserve currency is obviously a massive advantage that’s been conferred on us: the world lends us the capital that we use; we borrow, if you like, to finance consumption, but also to finance all of our investment. And as long as they continue to do that, it’s an amazing deal, and why would you rock the boat? The question is, how long will that continue? I think it’s unwise to assume it will last forever. Although, despite the disruptions of the past, let’s say, 20 years, including the rise of China, the global financial crisis, COVID, and now whatever we think is going to happen with AI, the dollar is still very attractive, or more attractive than the alternatives, which is what it takes.

Nathaniel E. Baker 29:52
Right, yeah, fair enough. Okay. So, putting on your political science hat: some of the stuff that you speak of does sound pretty alarming. And if you look at the collapse of societies, some of this has happened before, and not to very good effect. Do you think we’re at risk of that in the US? I’m assuming we’re always at some risk of it, but how realistic is it to expect any type of actual political upheaval in the US at any point? Do you think about that at all?

Simon Johnson 30:29
So, I’m an immigrant to the United States; I chose to become an American. And I really like this country, and I like its political system, which is not a fashionable thing to say. What I particularly like is that we have a lot of open competition for ideas, and there are a lot of people who are looking for better solutions, a lot of pragmatic politicians. Charlie Baker, the former governor of Massachusetts, was at MIT a couple of weeks ago and we talked; he has a book about how to get things done at the state level. And I had 150 mid-career MBAs and their bosses in the room, of all kinds of political persuasions, and they all loved what Baker was saying. They said, yes, we need more of this kind of practical problem-solving approach. So I think the streak of pragmatism runs very deep in American society. I do think that polarization is a problem, and I do think that polarized labor market outcomes, where some people do well and a lot of people do worse and worse, are absolutely a problem in terms of making people angry, making them frustrated, and making them not believe in the political system. So without question, we’ve got some pretty big tensions in the US. I think we can find a way out; I think we can find better jobs for more people, and that will help massively on all the other issues that we fight about.

Nathaniel E. Baker 31:38
Yeah, well, that’s very optimistic. You’re sounding American. So with that, I will put the link in the show notes for how you can pick up a copy of the book, on your digital device or physically. Thank you, Simon, for coming on and talking with us, and thank you all for listening. With that, we look forward to speaking to you again next time. Actually, wait, I forgot to ask you one thing, which is how people can get in touch with you, whether you have a social media presence. I did follow you on Twitter.

Simon Johnson 32:09
Yes, on Twitter it’s @baselinescene. And we absolutely welcome people to follow us there and tweet comments and reactions to the book; we will be building a social media conversation. I’m not sure how much longer social media, or Twitter, will last, but I think that part of social media is good, letting people engage with serious ideas seriously.

Nathaniel E. Baker 32:33
Look forward to that. Very cool. All right, check that out. And with that, again, thank you for listening, and we look forward to speaking to you again next week. See you then. Bye.
