Quantum computing took a giant leap forward on the world stage today
as NASA and Google, in partnership with a consortium of universities,
launched an initiative to investigate how the technology might lead to
breakthroughs in artificial intelligence.
The new Quantum Artificial Intelligence Lab will employ what may be
the most advanced commercially available quantum computer, the D-Wave Two, which a recent study found to be much faster than conventional machines at solving certain problems (see “D-Wave’s Quantum Computer Goes to the Races, Wins”).
The machine will be installed at the NASA Advanced Supercomputing
Facility at the Ames Research Center in Silicon Valley and is expected
to be available for government, industrial, and university research
later this year.
Google believes quantum computing might help it improve its Web
search and speech recognition technology. University researchers might
use it to devise better models of disease and climate, among many other
possibilities. As for NASA,
“computers play a much bigger role within NASA missions than most
people realize,” says quantum computing expert Colin Williams, director
of business development and strategic partnerships at D-Wave. “Examples
today include using supercomputers to model space weather, simulate
planetary atmospheres, explore magnetohydrodynamics, mimic galactic
collisions, simulate hypersonic vehicles, and analyze large amounts of
mission data.”
Quantum computers exploit the bizarre quantum-mechanical properties
of atoms and other building blocks of the cosmos. At its very smallest
scale, the universe becomes a fuzzy, surreal place—objects can seemingly
exist in more than one place at once or spin in opposite directions at
the same time.
While regular computers represent data as bits, 1s and 0s
expressed by flicking tiny switch-like transistors on or off, quantum
computers use quantum bits, or qubits, that can essentially be both on
and off, enabling them to carry out two or more calculations
simultaneously. In principle, quantum computers could prove
extraordinarily faster than normal computers for certain problems
because they can run through every possible combination at once. In
fact, a quantum computer with 300 qubits could run more calculations in
an instant than there are atoms in the universe.
D-Wave, which bills itself as the first commercial quantum computer company, has backers that include Amazon.com founder Jeff Bezos and the CIA’s investment arm In-Q-Tel (see “The CIA and Jeff Bezos Bet on Quantum Computing”). It sold its first quantum computing system, the 128-qubit D-Wave One, to the military contractor Lockheed Martin in 2011.
Earlier this year it upgraded that machine to a 512-qubit D-Wave
Two—reputedly for about $15 million, which might be roughly what the new
Quantum Artificial Intelligence Lab paid for its device.
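The 300-qubit comparison above is easy to check with a couple of lines of Python; the figure of roughly 10^80 atoms in the observable universe is a commonly cited order-of-magnitude estimate, used here only for scale.

```python
# Rough check of the claim above: a 300-qubit register spans 2^300 basis states,
# compared with a commonly cited estimate of about 10^80 atoms in the observable universe.
qubits = 300
basis_states = 2 ** qubits
atoms_in_universe = 10 ** 80  # order-of-magnitude estimate, for scale only

print(len(str(basis_states)) - 1)        # 90, i.e. 2^300 is roughly 10^90
print(basis_states > atoms_in_universe)  # True
```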
The collaboration between NASA, Google, and the Universities Space Research Association (USRA)
aims to use its computer to advance machine learning, a branch of
artificial intelligence devoted to developing computers that can improve
with experience. Machine learning boils down to a kind of optimization,
one that may come more easily to quantum computers than to conventional machines.
For instance, imagine trying to find the lowest point on a surface
covered in hills and valleys. A classical computer might start at a
random spot on the surface and keep stepping to a lower neighboring spot
until it cannot walk downhill any farther. This approach can often get stuck
in a local minimum, a valley that is not actually the very lowest point
on the surface. On the other hand, quantum computing could make it
possible to tunnel through a ridge to see if there is a lower valley
beyond it.
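As a toy illustration of the local-minimum trap described above (a plain classical search, not D-Wave's algorithm), the sketch below runs a greedy downhill walk on an invented one-dimensional landscape with two valleys; started on the wrong side of the ridge, it settles in the shallower one.

```python
# Toy illustration of the local-minimum trap described in the text; this is an
# ordinary classical "walk downhill," not D-Wave's method. The landscape is invented.

def landscape(x):
    # A double-well curve: a shallow valley near x = +1 and a deeper one near x = -1.
    return (x ** 2 - 1) ** 2 + 0.3 * x

def greedy_descent(x, step=0.01):
    """Step to whichever neighboring point is lower; stop when neither is."""
    while True:
        here = landscape(x)
        left, right = landscape(x - step), landscape(x + step)
        if left < here and left <= right:
            x -= step
        elif right < here:
            x += step
        else:
            return x  # no downhill neighbor: a local, not necessarily global, minimum

print(greedy_descent(2.0))   # settles in the shallow valley near x = +1 (a local minimum)
print(greedy_descent(-2.0))  # settles in the deeper valley near x = -1 (the global minimum here)
```

Quantum annealing is pitched as a way around exactly this trap: rather than climbing back over the ridge, the system can in effect tunnel through it.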
“Looks like win-win-win to me—Google, NASA, and USRA bring unique
skills and an interest in novel applications to the field,” says Seth Lloyd,
a quantum-mechanical engineer at MIT. “In my opinion, the focus on
factoring and code-breaking for quantum computers has overemphasized the
quest for constructing a large-scale quantum computer, while slighting
other potentially more useful and equally interesting applications.
Quantum machine learning is an example of a smaller-scale application of
quantum computing.”
Over the years, many critics have questioned whether D-Wave’s
machines are actually quantum computers and whether they are any more
powerful than conventional machines. The standard approach to
operating quantum computers, called the gate model, involves arranging
qubits in circuits and making them interact with each other in a fixed
sequence. In contrast, D-Wave starts off with a set of noninteracting
qubits—a collection of superconducting loops kept at their lowest energy
state, called the ground state—and then slowly, or “adiabatically,”
transforms this system into a set of interacting qubits whose
ground state represents the correct answer to the specific problem the
researchers programmed it to solve.
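To make the idea of an answer encoded in a ground state concrete, here is a minimal classical sketch: a problem is written as an Ising-style energy function over a few spins, and the answer is the spin assignment with the lowest energy. The fields and couplings below are invented for illustration, and the ground state is found by brute force rather than by adiabatic evolution.

```python
from itertools import product

# Sketch of the encoding idea behind D-Wave's approach: write a problem as an
# Ising-style energy function whose lowest-energy (ground-state) spin assignment
# is the answer. The fields h and couplings J below are made up for illustration.
h = {0: 0.5, 1: -0.2, 2: 0.0}                 # per-spin fields
J = {(0, 1): -1.0, (1, 2): 1.0, (0, 2): 0.5}  # pairwise couplings

def energy(spins):
    """Ising energy: sum_i h[i]*s_i + sum_{i<j} J[i,j]*s_i*s_j, with s_i in {-1, +1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(J[i, j] * spins[i] * spins[j] for (i, j) in J)
    return e

# A real annealer would reach the ground state by slow evolution; here we brute-force it.
ground_state = min(product([-1, +1], repeat=3), key=energy)
print(ground_state, energy(ground_state))  # (-1, -1, 1) with energy -2.8
```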
Many scientists have wondered whether the approach D-Wave uses is
vulnerable to disturbances that might keep its qubits from working properly.
But independent researchers recently found that D-Wave’s computers can
actually solve certain problems up to 3,600 times faster than classical
computers. Before choosing the D-Wave Two, NASA, Google, and USRA put
the computer through a series of benchmark and acceptance tests. It passed,
in some cases by a wide margin.
USRA will invite researchers across the United States to use the
machine. Twenty percent of its computing time will be open to the
university community at no cost through a competitive selection process,
while the rest of it will be split evenly between NASA and Google.
“We’ll be having some of the best and brightest minds in the country
working on applications that run on the D-Wave hardware,” Williams says.
Google has set plenty of restrictions on the functionality of apps
for Glass, the head-mounted display it is now shipping out to early
adopters. At the company’s annual developer conference, I/O, which kicks
off today, it will show app creators how to break those rules.
One conference session will be called “Voiding Your Warranty: Hacking Glass.”
But it could be controversial to encourage experimentation with a
product that at once has wowed people with its possibilities and spurred
uneasy imaginings of a society subject to ubiquitous, user-generated
surveillance. Google clearly wants developers to help explore the limits
of what Glass can do, and yet Glass is not even on the market yet, and a
handful of bars and cafés have already banned the hardware.
“Google really, really loves this project. But they are terrified,” says Chris Maddern,
one of a limited number of software developers who have been able to buy a $1,500
model through Google’s “Explorers” program. “There are so many things
that can go wrong between now and when it’s in consumer hands.”
In this context, Maddern says, he understands Google’s relatively
restrictive API, the gateway through which developers’ Web-based apps,
or “Glassware,” can interact with Glass’s modified Android operating
system—restrictions that have, at least officially, put on hold what he
sees as “almost every really cool application for wearable computing.”
For example, the API doesn’t allow developers to analyze a person’s
location, videos, or photos in real time, so no apps that recognize the
face of someone chatting with a Glass wearer; no augmented-reality-style apps that suggest dinner spots during a stroll.
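For a sense of what the sanctioned route does allow, here is a minimal sketch of a piece of Glassware pushing a plain text card to a wearer's timeline, assuming the REST-style timeline endpoint of Google's Mirror API (the official Glassware interface at the time); the access token and card text are placeholders, and error handling is kept to a minimum.

```python
import requests

# Placeholder OAuth 2.0 access token, obtained elsewhere through Google's standard flow.
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"

# Glassware does not run on the device; it inserts "timeline cards" through a REST API.
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def push_text_card(text):
    """Insert a plain-text timeline card that shows up on the wearer's Glass."""
    response = requests.post(
        TIMELINE_URL,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

card = push_text_card("Hello from a Glassware sketch")
print("Inserted card id:", card.get("id"))
```

Notice that nothing here touches the camera, the wearer's location, or real-time video; the service only pushes content for the wearer to glance at, which is roughly the boundary Maddern describes.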
Such apps remain possibilities, however, and Google is clearly encouraging
developers to experiment with such “hacks,” as they are called for now.
These hacks could influence the final shape that Glass takes. No one
knows what the platform will look like or how much it will cost if Glass
hits the market next year as planned (there is not even an official
Glassware portal or store yet, though Maddern started an unofficial one, and so have others).
When the first independent developer recently found a way to
jailbreak the device to run custom applications, causing a hubbub,
Google staff shot back on Twitter: “Yes, Glass is hackable. Duh.” Already, one developer has used facial-recognition
technology with Glass to build an app for doctors that calls up a
patient’s files. Another allows wearers to take a picture with a wink,
making photography less obvious than the Google setup of having wearers
speak to the device.
Most early apps created by Google itself are more prosaic, extending sharing capabilities seen on smartphones. Glassagram, for instance, uploads photos with filters. Beam makes it possible to share YouTube videos. Glass Tweet helps users, well, tweet.
Google has also worked directly with a handful of larger developers, including the mobile-only social network Path and the New York Times,
which created an app that displays the latest breaking news headlines
to Glass wearers inside the device’s small head-mounted display. The
popular life-organizing app Evernote is working on its own software for
Glass but won’t say more about how it will work. So is Twitter. Facebook
CEO Mark Zuckerberg has been impressed with the hardware, and any app
Facebook launches may well be a hit (see “Facebook Will Make the Most Popular App for Google Glass”). More apps are expected to surface during and after Google’s conference this week.
For consumer-oriented developers, one big question is whether Glass
will create entirely new businesses in the same way that the opening of
Apple’s app store launched a multibillion-dollar app economy. For now,
Google’s terms of service don’t allow developers to charge for apps or
show advertisements, but that probably won’t be true forever. And few
developers yet have access to the hardware—Maddern says he gets
“ludicrous offers” every day to work with large consumer companies that
want to get their hands on it.
If Glassware does become a business, Google’s own venture capital
division, Google Ventures, is likely to benefit. In April, in an unusual
arrangement, it joined with two other top-tier venture firms, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, to launch the Glass Collective, an investment syndicate that is sharing opportunities to provide seed funding to Glass startups.
So far, the Glass Collective has gotten “many dozens” of pitches, and
the three firms meet once a week to discuss them together, says Google
Ventures managing partner Bill Maris.
The first funding announcements should come soon, he says, though each
firm will make its own decisions, and the syndicate doesn’t have a
dedicated fund. Most pitches so far relate to the obvious features of
Glass—unique ways to send messages, share photos or video, or tap into a
person’s location, he says. Others are specific to industry sectors and
could target smaller markets, such as medicine (see “Will Anyone Build a Killer App for Google Glass?”).
Much depends, of course, on the success of Google Glass itself, and
that is far from a given. One crucial factor is its price—many feel it
should cost no more than a high-end smartphone accessory. The other main
question is whether Google succeeds at making Glass cool, or at least
socially acceptable.
“Will Glass be a platform? It’s really, really hard to create a
platform,” Maris says. “It takes a lot of money, dedication,
distribution, and acceptance from consumers. The important question is,
will the device get wide distribution? If the answer to that is yes,
business models will come.”