
Voices in AI – Episode 99 – A Conversation with Patrick Surry


About this Episode

On this episode of Voices in AI, Byron speaks with Patrick Surry of Hopper about the nature of intelligence and the path that our relationship with AI is taking.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Patrick Surry. He is the Chief Data Scientist at Hopper. He holds a PhD in math and statistics from the University of Edinburgh. Welcome to the show, Patrick.

Patrick Surry: It’s great to be here.

I like to start our journey off with the same question for most guests, which is: What is artificial intelligence? Specifically, why is it artificial?

That’s a really interesting question. I think there’s a bunch of different takes you get from different people about that. I guess the way I think about it [is] in a pragmatic sense of trying to get computers to mimic the way that humans think about problems that are not necessarily easily broken down into a series of methodical steps to solve.

Is it getting computers to think like humans, or is it getting computers to solve problems that only humans used to be able to solve?

I think for me the way that AI started was this whole idea of trying to understand how we could mimic human thought processes, so thinking about playing chess, as an example. We were trying to understand – it was hard to write down how a human played chess, but we wanted to make a machine that could mimic that human ability. Interestingly enough, as we build these machines, we often come up with different ways of solving the problem that are nothing like the way a human actually solves the problem.

Isn’t that kind of almost the norm in a way? Taking something pretty simple, why is it that you can train a human with a sample size of one? “This is an alien. Find this alien in these photos.” Even if the alien is upside down or half obscured or under water, we’re like “there, there, and there.” Why can’t computers do that?

I think computers are getting better at those kinds of problems. Humans have a whole set of not-well-understood pattern-matching abilities that we’ve evolved over thousands of years and trained since we were born as individuals. Those abilities limit the kinds of problems we solve and the way we solve them, but they do it in a really interesting way that allows us to solve the practical problems we’re actually interested in as a species: being able to survive and eat and find a mate and those kinds of things.

You know, it’s interesting because you’re right. It took us a long time, but it shouldn’t take computers nearly that long. They’re moving at the speed of light, right? If it takes a toddler five years, won’t we eventually be able to train a blank slate of a computer in five minutes?

Yes. I think you’re starting to see evidence of that now, right? I think we sort of started from a different place with computers. We started with this very predictable step-by-step binary system. We could show mathematically you could solve any kind of well-formulated mathematical problem. Then we decided [with] this universal computing device, it would be cool if we could make it solve the kinds of problems that humans solve. It’s almost like we started from the wrong place, in a sense. If you were trying to mimic humans, maybe we should have gone a lot farther down the analog computing path instead of trying to build everything on top of this binary computer, which doesn’t really match the underlying hardware of a human very well.

We’re massively parallel, and computers are just enormously fast at working sequentially.

Also, this sort of digital versus analog thing is always interesting. The way human brains seem to work is with lots of gradients of electricity and chemicals and that is very different from the fundamental unit of a computer, which is this 0 or 1 bit. I think when you look at a lot of the recent work that’s being done in computer vision and these generative networks and so forth, the starting point is first of all to construct something that looks a lot more analog and a lot more like things that you find in someone’s brain out of these fundamental units that we originally built in the computer.
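
As an illustration of that last point (my own hedged sketch, not something from the episode): here is a minimal NumPy example of a single artificial neuron whose output varies smoothly, even though every quantity involved is ultimately stored as binary floating-point numbers on digital hardware.

```python
# Minimal sketch (not from the episode): one artificial "neuron" that produces
# a graded, analog-like output out of values that are, underneath, just bits.
import numpy as np

def sigmoid(z):
    """Smooth activation: maps any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a smooth nonlinearity."""
    return sigmoid(np.dot(inputs, weights) + bias)

# Hypothetical values, chosen only to show the graded response.
inputs = np.array([0.2, 0.7, 0.1])
weights = np.array([1.5, -0.8, 2.0])
print(neuron(inputs, weights, bias=0.1))  # some value strictly between 0 and 1
```

Nudging any input slightly nudges the output slightly, which is the analog-looking behavior built out of fundamentally digital units that Patrick is describing.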

You know, records, LPs, they’re analog. CDs came along and they’re digital. Do you think people can tell the difference between the two when they listen to them?

I certainly cannot.

I can’t either. Then again, maybe it’s my own shortcoming, I don’t know. But to me a CD isn’t just an approximation of an analog experience. It’s beyond an approximation, to me at least.

There are people I know who claim that they can tell the difference. I think it’s like that with a lot of things. We’ve gotten to a point where the approximation has such high fidelity that you can’t really tell it’s different. You look back at the early days of television, or the first computer monitor I had way back in the day with my Apple IIe or whatever it was: there were four colors, and you could individually see every box on the screen as a little pixel.

Now you have an 8K TV. If you’re not within an inch of the screen, it looks like a completely continuous picture. It’s sort of the same thing. I think with the CD, once you get to a certain level of digital approximation, it may not be the most efficient, but you can trick most of the people.
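
To make that last point concrete (again my own sketch, not part of the transcript): CD audio approximates an analog waveform by taking 44,100 samples per second and rounding each sample to one of 65,536 (16-bit) levels, and the resulting quantization error is tiny compared to the signal itself.

```python
# Rough sketch (not from the episode): approximate a 1 kHz "analog" tone at CD
# resolution (44.1 kHz sample rate, 16-bit samples) and measure the error.
import numpy as np

sample_rate = 44_100               # samples per second, the CD standard
duration = 0.01                    # seconds of signal, enough to illustrate
t = np.arange(0, duration, 1.0 / sample_rate)

analog = np.sin(2 * np.pi * 1000 * t)         # idealized continuous 1 kHz tone
quantized = np.round(analog * 32767) / 32767  # snap each sample to a 16-bit level

error = analog - quantized
print("signal peak:            ", np.abs(analog).max())  # roughly 1.0
print("max quantization error: ", np.abs(error).max())   # about 1.5e-5
```

The per-sample error is at most half of one 16-bit step, tens of thousands of times smaller than the signal peak, which is one way of seeing why most listeners can’t tell the difference.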

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.



