Tech sector needs to face up to responsibility and embrace regulation


Microsoft President Brad Smith called on the tech industry to “step up and do more to address the challenges” that technology has created, speaking in an interview for the GeekWire Podcast about his new book, Tools and Weapons, written with Carol Ann Browne. (GeekWire Photo / Todd Bishop)

Microsoft President Brad Smith has a light-up globe, a neon blue miniature likeness of Earth, spinning on his desk inside the company’s Redmond headquarters. When we sat down with him this week, he noticed something wrong with it.

“This always makes me nervous when the globe is turning the wrong direction. Like, ‘There’s tsunamis coming!’ ” he said, reaching out to spin the blue sphere the other way. “There we go. … It’s been fixed.”

It won’t be so easy to get the tech industry and the rest of the real world spinning in the same direction. That much is clear from Smith’s new book. But he says it’s imperative to get the process in motion now.

“The tech sector needs to step up and do more to address the challenges that technology is creating,” he says. “As it does that, it needs to recognize that in part it requires companies working more closely with governments. It even requires embracing to a degree what I would call a smart approach to regulation so that technology is, among other things, governed by law.”

In “Tools and Weapons: The Promise and the Peril of the Digital Age,” the longtime Microsoft executive and his co-author Carol Ann Browne tell the inside story of some of the biggest developments in tech and the world over the past decade — including Microsoft’s reaction to the Snowden revelations, its battle with Russian hackers in the lead-up to the 2016 elections and its role in the ongoing debate over privacy and facial recognition technology.

The book goes behind-the-scenes at the Obama and Trump White Houses, explores the implications of the coming wave of artificial intelligence, and calls on tech giants and governments to step up and prepare for the ethical, legal and societal challenges of powerful new forms of technology yet to come.

We touched on many of those topics in this conversation with Smith about the new book on this episode of the GeekWire Podcast, and we’ll continue the conversation with Smith at the upcoming GeekWire Summit.

Listen above, subscribe in your favorite podcast app, and continue reading for an edited transcript.

Todd Bishop: One thing that will surprise people out of the gate is the perch from which you’re able to view not just the tech industry, not just the world, but really humanity, some of the biggest issues facing humanity. I think people might not expect that the president of Microsoft would have this view into these issues that matter so much to everything beyond technology. That’s one of my key takeaways from the book. Why this book and why now?

Brad Smith: I think to some degree, Todd, you’ve hit part of the nail on the head. The whole reason we have the perch that we do is that technology is sweeping the world, it’s changing the world. Carol Ann Browne, who’s my co-author, works with me on all my external relations. That means we travel the world. As we do and as we meet with leaders around the world, what we really see is the incredible impact that technology is having in many good ways but also in challenging ways as well. What motivated us to write the book in part was a real sense of urgency that we felt to try to make all of these developments and these issues — from privacy and security to the impact of AI on jobs — more accessible to people and to make the case for the kinds of changes that we think are needed.

TB: If you were to explain the central thesis of the book, what would it be?

Brad Smith: I think our key argument is that the tech sector needs to step up and do more to address the challenges that technology is creating. As it does that, it needs to recognize that in part it requires companies working more closely with governments. It even requires embracing to a degree what I would call a smart approach to regulation so that technology is, among other things, governed by law.

TB: One of the things that’s extremely striking in the book, and you don’t necessarily state it explicitly, but government in many countries is headed in one direction, in the direction of nationalism. The tech industry and the economy in many cases are headed in a completely opposite direction, globalism. Where is this going to end up? Because it doesn’t feel like these two things are compatible.

Brad Smith: Well, one of the points that we in fact make in the book is the one that you just referred to. In many ways it feels like the democracies of the world in particular are more challenged than at any point since, say, the 1930s. Countries are pulling inwards. There is a rise in nationalism. All of this is at a time when macro-economically the world has been doing well, so we should all be concerned about what will happen politically if there are recessions around the world. Part of our point is that when you think about the issues that we all spend time talking about — whether it’s trade, or immigration, or income inequality, or globalization — to some degree all of these are phenomena that are happening in part because of technology. It is the technology, as you say, that’s driving the globalization, and then that is unleashing these responses to it when people see the concerns. But we don’t necessarily talk enough or think enough about the role that technology is playing, hence our effort to do so.


TB: The other point that you make is that not only do tech companies need to do more to work together and collaborate with government, but government needs to think differently. One of the phrases that stuck with me from the book was minimum viable regulation. People in the tech world will be familiar with MVPs, minimum viable products. How does that translate into the world of government law and regulation?

Brad Smith: It will be fascinating to see how people in government circles react to this concept. As you point out, as you know, oftentimes in the world of software, people focus not on building the most complete, complex, full-featured product, but a minimum viable product first. They get feedback, and then they use that feedback to make the product better. We actually believe that that has something to offer for the world of technology regulation.

We, for example, in the book talk about how it can be used to address the problem of bias and discrimination when it comes to facial recognition software. It’s really difficult today to say that any of us knows exactly what a broad facial recognition law will need to look like in five years. But our point is if there were even just a straightforward law that would require companies that want to offer a facial recognition service to make their service available for testing, you would in effect create the Consumer Reports equivalent that would evaluate bias in different services. You would stimulate the market to act in a well-informed way to reward companies that move faster to reduce bias. That is a great example in our minds at least of where government could move faster in a more focused way and then learn. Then as it learns, it can add to a regulation in the future.
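To make that example concrete, here is a minimal, hypothetical sketch of the kind of measurement a Consumer Reports-style tester might publish under such a law: comparing a facial recognition service’s false-match rate across demographic groups on a labeled benchmark. The function names and sample data below are illustrative assumptions, not anything described in the book or offered by any real vendor.

```python
# Hypothetical bias-audit sketch: given labeled verification trials from a
# facial recognition service, compare false-match rates across demographic groups.
# All names and data here are illustrative, not any real vendor's API.
from collections import defaultdict

def false_match_rates(trials):
    """trials: iterable of (group, predicted_match, actual_match) tuples."""
    counts = defaultdict(lambda: {"false": 0, "total": 0})
    for group, predicted, actual in trials:
        if not actual:                      # only genuinely non-matching pairs count
            counts[group]["total"] += 1
            if predicted:                   # the service wrongly reported a match
                counts[group]["false"] += 1
    return {g: c["false"] / c["total"] for g, c in counts.items() if c["total"]}

# A published audit could flag services whose error rates diverge widely by group.
sample = [("group_a", True, False), ("group_a", False, False),
          ("group_b", False, False), ("group_b", False, False)]
print(false_match_rates(sample))  # {'group_a': 0.5, 'group_b': 0.0}
```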

TB: Because right now it’s the opposite with government. You’re known in part for showing up at a congressional hearing with the original IBM laptop that was made in the same year as the regulation you were testifying about. Is there, though, a risk that government could move too fast? Because part of the problem that tech companies have gotten into is that they’re moving fast and breaking things, and we don’t want our government to do that.

Brad Smith: There is always that risk. I mean, look, let’s face it, there’s sort of a risk of everything in life. I risk getting hit by a car if I cross the street. I risk never going to the mailbox if I fail to cross the street. That risk is real, and we acknowledge it. What we also say, what we actually conclude in the last chapter, is right now the problem of the day is not that governments are too fast, it’s that they’re too slow. It’s not that governments are doing too much, it’s that they’re doing too little. Therefore, what we in the tech sector and in communities more broadly should do is help figure out how to help governments do more, move faster, be thoughtful, strike the right balance rather than just stay home.


Regulating facial recognition

TB: Microsoft essentially came up with principles and the fundamental underpinnings of legislation for facial recognition. You write in the book that facial recognition is one of the first opportunities for the tech sector and governments to address the ethical issues for AI in a focused and concrete way now. Where does that stand, though? Because you’ve struggled to get that through.

Brad Smith: Well, it’s interesting. The first thing I would say is it’s very early days for facial recognition law and regulation. We as a company put out the first call, so to speak, for more work in this area, and that was just last year. It was in July of 2018. Then what is also interesting is to see how fast moving the facial recognition issue has been. When we first said that this was a problem that would need more focused effort by governments, there were many people in the tech sector who said to us, “Why are you talking about this? This is not a problem.” Yet now we’ve seen the city of San Francisco, literally next door to Silicon Valley, ban public use of facial recognition in city limits.

Clearly there is something here that is bothering people. We had hoped we might see some legislation passed in Washington state this year, 2019. We did in the Senate, not in the House. One thing I think we can say very safely: this issue is not going away. The ACLU is pushing hard. Many organizations are. We’re going to be debating facial recognition rules probably for the rest of our lives and probably in a very robust way for the next decade.

TB: When you think about AI broadly, that is, you devote I think at least three standalone chapters to AI and various implications of it. It struck me that you’re essentially cautiously optimistic. Maybe that’s even going too far. You really try to strike a balance. Maybe that gets to the fundamental title of the book, Tools and Weapons. AI is the ultimate example of a tool and a weapon. When you’re talking to somebody out on the street and trying to explain how you think about AI, what do you say?

Brad Smith: I think you just put it very well. This is an incredibly powerful, helpful, and important tool. I fully believe that AI will help the world cure cancer. In fact, in one of the later chapters we tell the story of how the Fred Hutch, the cancer research institute here in Seattle, is using machine learning as the new microscope, if you will, to get at patterns that can help cure cancer. At the same time it can be put to uses that would include the creation of a mass surveillance state of the sort imagined by George Orwell in his novel 1984.

That’s why in the book we really try to take the time first to help people learn a little bit more about why AI has developed and how it has developed, the broad range of ethical issues it is creating, and some of the focused issues like facial recognition that are the first concrete controversies. Then we talk more broadly about the impact of AI on the economy and put that in historical context. We see it as a transition as big as the transition from the horse to the combustion engine and the automobile. One of the many things we try to do in this book is help people learn from history so that they can take some of those lessons and use them themselves as they just think about where this is all going.

TB: I think one of the other things that will surprise people if they were just coming into this book cold not really seeing the evolution of Microsoft over the past 10, 15, perhaps 20 years is that you make an explicit call for more regulation of the tech industry, and you call on your counterparts in the tech industry to get on board with this idea. Why is that important?

Brad Smith brought a 30-year-old laptop to the House Judiciary Committee to illustrate the outdated nature of the Electronic Communications Privacy Act. (Image via House Judiciary Committee/@HouseJudiciary)

Brad Smith: Well, first of all, we point out that in fact almost no technology in the history of technology has gone for so long with so little regulation as digital technology. There’s lots of reason to look at that and say, “And hasn’t then that been great? Look at all the innovation.” We would endorse that wholeheartedly, but there is a big but. This technology is impacting every part of our lives, our economies, our societies, our security. I think as Americans, we’re all used to this concept no person is above the law. No government is above the law. No company is above the law. I would say no technology can afford to consistently always be above the law.

We’re not going to solve the concerns that people have about technology or tech companies. We’re not going to be successful in addressing techlash if frankly the entire burden is put on the tech sector alone. We need governments to do more, and that means creating the rules, the laws, the regulations, if you will, in a smart, thoughtful, and balanced way to address the real problems that bother people, whether it’s privacy, or security, or the impact on jobs, for example.

TB: What would you say to those who would say, “Well, it’s easy for Microsoft to say that. You don’t get the majority of your revenue from advertising that relies on personal information, or from artificial intelligence. Sure, it’s part of Azure, but you’re not relying on that right now for all of your revenue. That’s a convenient position for Microsoft to take. Clearly it’s just a competitive wedge against Facebook, Google, et al.”

Brad Smith: Well, the first thing I would say is in every respect we will be affected as well. I mean, yes, you can argue only part of our business gets affected by rules on advertising, and only part of our business gets affected by rules on facial recognition. Yes, but by the time you add up all of the parts of our business, virtually all of them will be affected in an important way.

The second thing that we point out in the book is that we as a company have lived through a transition from a lack of regulation to rules. It was part of and the result of our antitrust issues beginning in the 1990s. We found that if we adapted, we could flourish. This is not about trying to impose rules that will stop other people from succeeding. I would argue to the contrary. If the tech sector wants to succeed on a sustained and sustainable basis, it will be better served if the public has the confidence and the trust in technology that comes in part from having rules.

Just think about the fact we all go to the grocery store. We can look at a package and know what’s in it because there are nutrition rules that standardize how every company needs to report what is in its products. Think about the issues around commercial aircraft. Think about the Boeing issues today. The flying public needs rules that give it confidence in the safety of airplanes, and at the end of the day, so do the companies that make the aircraft.

TB: You make the point people might say, “Well, tech is too complicated for Congress or legislators to understand,” but with the airplane example, no one would ever say that about the FAA, at least no one who wanted to fly safely.

Brad Smith: Exactly. I think that is such an interesting comparison. It’s why we talk about it. People all the time in the tech sector say, “Oh, you can’t possibly have regulation. This technology is too complicated.” We point out, “Well, what’s an airplane these days? It’s sort of a computer with wings.” There are real regulatory issues, but nobody says, “Oh, it’s too complicated. Let’s just abolish the FAA and hope that the aircraft makers do their best.” Similarly, think about an automobile. We point out in the book, we tell the story of how by the year 2030 an automobile will consist fully half in terms of value of computers, and data, and the like. That’s a computer with wheels. Nobody says, “Oh, now let’s no longer regulate auto safety because it’s got a lot of computers inside.” I would argue that if governments can figure out how to regulate a computer with wheels and address a computer with wings, it sure as heck can also do a good job of addressing a computer that sits still and sits in a data center.


TB: If you were to give two or three concrete rules, regulations that would get folks on the right path, what would they be?

Brad Smith: Well, you see it in the topics that we cover. In part we think there’s just this urgent need to press forward in the cybersecurity area. That has already led to more efforts by companies and governments to work together, but we have a lot more to do to strengthen cybersecurity protection. The next is privacy. We’re clear advocates, including in the book, for a national privacy law that provides strong privacy protection for consumers. We share in the book the stories of how privacy has jumped across the Atlantic in the last couple of years, but how in some ways that is just a beginning. Certainly facial recognition is a third area. We’re very explicit. We tell the story of how we thought this through, what led us to develop some of these proposals at Microsoft. We talk about the people we’ve encountered around the world.

TB: One of the principles that you lay out is that in cases where people are being scanned or having their likeness processed through facial recognition systems, they need to know.

Brad Smith: Absolutely. We acknowledge in the book that we’re still in early stages, as I was saying before. The first step is to let people know when facial recognition is being used in public places. We also acknowledge in the chapter on facial recognition that we’re going to need more protections on top of that, especially protections that ensure people’s data is only shared with their consent in some appropriate way. There’s a lot that we’re going to need to think through, but we have to start in some way, I think, by letting people know, because once you let people know, then you start a public conversation. Until people know, it just is invisible, and invisible is not good in this instance.

Microsoft vs. Russian hackers

TB: I loved the behind-the-scenes stories of news events that were happening over the last few years, and hearing from your perspective what was going on behind the scenes, everything from WannaCry to the Snowden revelations. There were lots of new insights there. The thing that really struck me was Microsoft’s efforts to thwart the Russian hackers that were seeking to influence the US elections through a combination of technical and legal tactics that you used. There was one sentence that just made my jaw drop. You said, “It was soon apparent that the Russians were innovating as quickly as we were.” I think about that, and granted, Russian hackers, obviously they’re innovative, but you’re Microsoft. It’s stunning because I tend to think of the risk factors in the Microsoft 10-K — Google, Apple, Amazon. You wouldn’t list Russia among your risk factors. What does that say about the state of the world that Russia is effectively a competitor in terms of innovation in that way with Microsoft?

Brad Smith: Well, it in part says that there’s some very smart people working in places like Saint Petersburg and Moscow.

TB: On behalf of the Russian government. I think that’s the twist.

Brad Smith: That’s fair. It’s true. It’s what we say, and it’s what we see. This is a threat to our democracy more even than to any company or to the tech sector, which is why we have a few chapters devoted to this range of issues that has arisen around cyber hacking. We share the story of what we saw just the same week as the Democratic National Convention in 2016 and how we responded to that. We put it in an even broader historical context that one of the attributes of a democracy is that it is potentially subject to foreign interference. This was actually true going back to the very first years literally of the American Republic. It was something that George Washington had to deal with as president of the United States.

The point that we also make is that over the last several decades, for the most part, people in democracies tended to see communications technology as something that was on the side of democracy. The United States and other countries used it in some ways in a manner that some would now find objectionable, but very much used it to beam information into Eastern Europe, for example, and educate people. Now the shoe is on the other foot. Technology is being used to disrupt democracy, to weaponize the email of political candidates, to spread disinformation, and to literally encourage neighbors to protest against their neighbors, as we saw in 2016, egged on by people in Saint Petersburg. It’s extraordinary. We have to start by just understanding what’s going on, I think, and put it in that historical context. Then we do share the journey, the struggle in some ways, that we and everybody in the tech sector have been on to start to figure out how to respond, and also the struggle of trying to get government on board, because too often people are just arguing with each other rather than really working in a united way to defend the country.

TB: I want to be clear, I’m not questioning the technical skills of people from Russia. It’s just Microsoft versus the Russian government is what struck me, to be clear.

Brad Smith: It’s a fair point, Todd. Companies are not used to criticizing governments. We’re used to defending ourselves from criticism by governments perhaps more so than the other way around. Yet when you have multiple governments around the world attacking your customers, life changes. Part of what we describe is what it meant to have to come to terms with this change and what it meant inside a company like Microsoft.

Microsoft executives Brad Smith, Satya Nadella and Amy Hood take questions from shareholders. (GeekWire Photo / Nat Levy)

TB: There are a couple moments in the book where you tell the inside story of the senior leadership team meetings with Satya Nadella, Microsoft CEO, and Satya stops and makes a decision or an observation that changes the course of the discussion of the company’s policy and sometimes even big picture political issues. What have you learned from Satya over the last five years of his tenure as CEO?

Brad Smith: One of the really interesting aspects of Satya, which we talk about in the book, is the fact that he’s not only the engineer that people see all the time. He grew up as the son of a very important, highly respected government official in India, in really the first generation after India became a free nation and no longer a colony. As we say, I think that gives Satya this almost intuitive feel for how governments tend to work. Now, part of what that has caused him to bring to our work here at Microsoft, which I think has been tremendously beneficial, is a commitment to being principled. He often says, “Look, let’s create a set of principles. Let’s get the principles right. Let’s be transparent publicly with what they are, and then we’ll move the company so that we’re then making decisions based on these principles.”

I think that in so many ways has made it possible for us at Microsoft to be more decisive, to lean in to address these issues. It gives us an ability to be consistent over time. Consistency then in turn is fundamental to winning people’s confidence and sustaining their trust because we become predictable.




