
Wednesday, 4 December 2013

Key Processes of Photosynthesis Simulated On Quantum Level


Using an artificial quantum system, physicists at Heidelberg University have simulated key processes of photosynthesis on the quantum level with high spatial and temporal resolution. In their experiment with Rydberg atoms, the team of Prof. Dr. Matthias Weidemüller and Dr. Shannon Whitlock discovered new properties of energy transport. This work is an important step towards answering the question of how quantum physics can contribute to the efficiency of energy conversion in synthetic systems, for example in photovoltaics.

The new discoveries, which were made at the Center for Quantum Dynamics and the Institute for Physics of Heidelberg University, have now been published in the journal Science.
In their research, Prof. Weidemüller and his team begin with the question of how the energy of light can be efficiently collected and converted elsewhere into a different form, e.g. into chemical or electric energy. Nature has found an especially efficient way to accomplish this in photosynthesis. Light energy is initially absorbed in light-harvesting complexes -- an array of membrane proteins -- and then transported to a molecular reaction centre by means of structures called nanoantennae; in the reaction centre the light is subsequently transformed into chemical energy. "This process is nearly 100 percent efficient. Despite intensive research we're still at a loss to understand which mechanisms are responsible for this surprisingly high efficiency," says Prof. Weidemüller. Based on the latest research, scientists assume that quantum effects like entanglement, where spatially separated objects influence one another, play an important role.
In their experiments the researchers used a gas of atoms cooled to a temperature near absolute zero. Some of the atoms were excited with laser light into highly excited electronic states. In these "atomic giants," known as Rydberg atoms, the excited electron is separated from the atomic nucleus by nearly macroscopic distances of almost a hair's breadth. These atoms therefore provide an ideal system for studying phenomena at the transition between the macroscopic, classical world and the microscopic quantum realm. As in the light-harvesting complexes of photosynthesis, energy is transported from Rydberg atom to Rydberg atom, each atom passing its energy packets on to surrounding atoms much like a radio transmitter.
"To be able to observe the energy transport we first had to find a way to image the Rydberg atoms. At the time it was impossible to detect these atoms using a microscope," explains Georg Günter, a doctoral student in Prof. Weidemüller's team. A trick from quantum optics ensured that up to 50 atoms within a characteristic radius around a Rydberg atom were able to absorb laser light. In this way each Rydberg atom creates a tiny shadow in the microscope image, allowing the scientists to measure the positions of the Rydberg atoms.
The fact that this technique would also facilitate the observation of energy transport came as a surprise, as PhD student Hanna Schempp emphasises. However, the investigations with the "atomic giants" showed how the Rydberg excitations, which are immersed in a sea of atoms, diffused from their original positions to their atomic neighbours, similar to the spreading of ink in water. Aided by a mathematical model the team of Prof. Weidemüller showed that the atomic sea crucially influences the energy transport from Rydberg atom to Rydberg atom.
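The diffusive spreading described above, with excitations wandering from atom to atom like ink in water, can be pictured with a generic random-walk model. This is a simple sketch for intuition only, not the Heidelberg group's actual simulation:

```python
import random

random.seed(0)

def mean_squared_displacement(n_walkers, n_hops):
    """Average squared distance travelled by excitations hopping randomly
    left or right on a 1-D chain of atoms (a toy model of diffusion)."""
    total = 0
    for _ in range(n_walkers):
        pos = 0
        for _ in range(n_hops):
            pos += random.choice((-1, 1))  # hop to a random neighbour
        total += pos * pos
    return total / n_walkers
```

The hallmark of diffusive transport is that the mean squared displacement grows linearly with the number of hops, in contrast to the coherent quantum transport mentioned later in the article, where it grows quadratically with time.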
"Now we are in a good position to control the quantum system and to study the transition from diffusive transport to coherent quantum transport. In this special form of energy transport the energy is not localised to one atom but is distributed over many atoms at the same time," explains Prof. Weidemüller. As with the light-harvesting complexes of photosynthesis, one central question will be how the environment of the nanoantennae influences the efficiency of energy transport and whether this efficiency can be enhanced by exploiting quantum effects. "In this way we hope to gain new insights into how the transformation of energy can be optimised in other synthetic systems as well, like those used in photovoltaics," the Heidelberg physicist points out.

Wednesday, 20 November 2013

Robotic Advances Promise Artificial Legs That Emulate Healthy Limbs

Recent advances in robotics technology make it possible to create prosthetics that duplicate the natural movement of human legs. This capability promises to dramatically improve the mobility of lower-limb amputees, allowing them to negotiate stairs, slopes and uneven ground, significantly reducing their risk of falling as well as the stress on the rest of their bodies.


For the last decade, Michael Goldfarb's team at Vanderbilt University has been doing pioneering research in lower-limb prosthetics. It developed the first robotic prosthesis with both powered knee and ankle joints, and the design became the first artificial leg controlled by thought when researchers at the Rehabilitation Institute of Chicago created a neural interface for it.

In the article, Goldfarb and graduate students Brian Lawson and Amanda Shultz describe the technological advances that have made robotic prostheses viable. These include lithium-ion batteries that can store more electricity; powerful brushless electric motors with rare-earth magnets; miniaturized sensors built into semiconductor chips, particularly accelerometers and gyroscopes; and low-power computer chips.

The size and weight of these components are small enough that they can be combined into a package comparable to a biological leg, and together they can duplicate all of its basic functions. The electric motors play the role of muscles. The batteries store enough power for the robotic legs to operate for a full day on a single charge. The sensors serve the function of the nerves in the peripheral nervous system, providing vital information such as the angle between the thigh and lower leg and the force being exerted on the bottom of the foot. The microprocessor provides the coordination normally handled by the central nervous system. And, in the most advanced systems, a neural interface enhances integration with the brain.

Unlike passive artificial legs, robotic legs are capable of moving independently and out of sync with their users' movements. So the development of a system that integrates the movement of the prosthesis with the movement of the user is "substantially more important with a robotic leg," according to the authors.

Not only must this control system coordinate the actions of the prosthesis within an activity, such as walking, but it must also recognize a user's intent to change from one activity to another, such as moving from walking to stair climbing.

Identifying the user's intent requires some connection with the central nervous system. Currently, there are several different approaches to establishing this connection that vary greatly in invasiveness. The least invasive method uses physical sensors that divine the user's intent from his or her body language. Another method -- the electromyography interface -- uses electrodes implanted into the user's leg muscles. The most invasive techniques involve implanting electrodes directly into a patient's peripheral nerves or directly into his or her brain. The jury is still out on which of these approaches will prove to be best. "Approaches that entail a greater degree of invasiveness must obviously justify the invasiveness with substantial functional advantage…," the article states.

There are a number of potential advantages of bionic legs, the authors point out.
Studies have shown that users equipped with lower-limb prostheses with powered knee and ankle joints naturally walk faster with decreased hip effort, while expending less energy than when using passive prostheses.

In addition, amputees using conventional artificial legs experience falls that lead to hospitalization at a higher rate than the elderly living in institutions. The rate is actually highest among younger amputees, presumably because they are less likely to limit their activities and terrain. There are several reasons why a robotic prosthesis should decrease the rate of falls: because it moves like a natural leg, users don't have to compensate for deficiencies in its movement as they do with passive legs; both while walking and standing, it can compensate better for uneven ground; and active responses that help users recover from stumbles can be programmed into the leg.
Before individuals in the U.S. can begin realizing these benefits, however, the new devices must be approved by the U.S. Food and Drug Administration (FDA).

Single-joint devices are currently considered Class I medical devices, so they are subject to the least amount of regulatory control. Transfemoral prostheses are generally constructed by combining two single-joint prostheses, so they have also been considered Class I devices.

In robotic legs, the knee and ankle joints are electronically linked. According to the FDA, that makes them multi-joint devices, which are considered Class II medical devices. This means they must meet a number of additional regulatory requirements, including the development of performance standards, post-market surveillance, the establishment of patient registries, and special labeling.

Another translational issue that must be resolved before robotic prostheses can become viable products is the need to provide additional training for the clinicians who prescribe prostheses. Because the new devices are substantially more complex than standard prostheses, the clinicians will need additional training in robotics, the authors point out.

In addition to the robotic leg, Goldfarb's Center for Intelligent Mechatronics has developed an advanced exoskeleton that allows paraplegics to stand up and walk -- work that led Popular Mechanics magazine to name him one of the 10 innovators who changed the world in 2013 -- and a robotic hand with a dexterity approaching that of the human hand.


Monday, 18 November 2013

New Hologram Technology Created With Tiny Nanoantennas


Researchers have created tiny holograms using a "metasurface" capable of the ultra-efficient control of light, representing a potential new technology for advanced sensors, high-resolution displays and information processing.

The metasurface, thousands of V-shaped nanoantennas formed into an ultrathin gold foil, could make possible "planar photonics" devices and optical switches small enough to be integrated into computer chips for information processing, sensing and telecommunications, said Alexander Kildishev, associate research professor of electrical and computer engineering at Purdue University.

Laser light shines through the nanoantennas, creating the hologram 10 microns above the metasurface. To demonstrate the technology, researchers created a hologram of the word PURDUE smaller than 100 microns wide, or roughly the width of a human hair.
"If we can shape characters, we can shape different types of light beams for sensing or recording, or, for example, pixels for 3-D displays. Another potential application is the transmission and processing of data inside chips for information technology," Kildishev said. "The smallest features -- the strokes of the letters -- displayed in our experiment are only 1 micron wide. This is a quite remarkable spatial resolution."

Findings are detailed in a research paper appearing on Friday (Nov. 15) in the journal Nature Communications.

Metasurfaces could make it possible to use single photons -- the particles that make up light -- for switching and routing in future computers. While using photons would dramatically speed up computers and telecommunications, conventional photonic devices cannot be miniaturized because the wavelength of light is too large to fit in tiny components needed for integrated circuits.

Nanostructured metamaterials, however, are making it possible to reduce the wavelength of light, allowing the creation of new types of nanophotonic devices, said Vladimir M. Shalaev, scientific director of nanophotonics at Purdue's Birck Nanotechnology Center and a distinguished professor of electrical and computer engineering.

"The most important thing is that we can do this with a very thin layer, only 30 nanometers, and this is unprecedented," Shalaev said. "This means you can start to embed it in electronics, to marry it with electronics."

The layer is about 1/23rd the width of the wavelength of light used to create the holograms.
The Nature Communications article was co-authored by former Purdue doctoral student Xingjie Ni, who is now a postdoctoral researcher at the University of California, Berkeley; Kildishev; and Shalaev.

Under development for about 15 years, metamaterials owe their unusual potential to precision design on the scale of nanometers. Optical nanophotonic circuits might harness clouds of electrons called "surface plasmons" to manipulate and control the routing of light in devices too tiny for conventional lasers.

The researchers have shown how to control the intensity and phase, or timing, of laser light as it passes through the nanoantennas. Each antenna has its own "phase delay" -- how much light is slowed as it passes through the structure. Controlling the intensity and phase is essential for creating working devices and can be achieved by altering the V-shaped antennas.
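The idea that per-antenna phase delays shape the emitted light can be illustrated with a toy one-dimensional phased array. The wavelength, spacing and antenna count below are illustrative assumptions, not the Purdue design:

```python
import cmath
import math

WAVELENGTH = 676e-9   # metres; an assumed red-laser wavelength
SPACING = 180e-9      # metres; assumed antenna spacing
N = 40                # number of antennas in the toy array

def far_field_amplitude(phases, theta):
    """Sum the unit waves emitted by each antenna, combining its individual
    phase delay with the geometric path difference at angle theta."""
    k = 2 * math.pi / WAVELENGTH
    return abs(sum(
        cmath.exp(1j * (phases[n] + k * n * SPACING * math.sin(theta)))
        for n in range(N)))

# A linear phase gradient across the array steers the beam to 10 degrees:
steer_phases = [
    -2 * math.pi * n * SPACING * math.sin(math.radians(10)) / WAVELENGTH
    for n in range(N)
]
```

At 10 degrees the phase terms cancel, all N contributions add in phase and the amplitude peaks; at other angles the phasors partially cancel. A hologram generalizes this by imposing a more elaborate phase pattern across a two-dimensional metasurface.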

The work is partially supported by the U.S. Air Force Office of Scientific Research, the Army Research Office and the National Science Foundation. Purdue has filed a provisional patent application on the concept.


Friday, 15 November 2013

Machines Learn to Detect Breast Cancer


Software that can recognize patterns in data is commonly used by scientists and economists. Now, researchers in the US have applied similar algorithms to help them more accurately diagnose breast cancer. The researchers outline the details in the International Journal of Medical Engineering and Informatics.


Duo Zhou, a biostatistician at pharmaceutical company Pfizer in New York and colleagues Dinesh Mittal and Shankar Srinivasan of the University of Medicine and Dentistry of New Jersey, point out that data pattern recognition is widely used in machine-learning applications in science. Computer algorithms trained on historical data can be used to analyze current information and detect patterns and then predict possible future patterns. However, this powerful knowledge discovery technology is little used in medicine.

The team suggested that just such an automated statistical analysis methodology might readily be adapted to a clinical setting. They have done just that, using an algorithmic approach to analyze breast cancer screening data in order to more precisely recognize malignant tumors in breast tissue, as opposed to benign growths or calcium deposits. This could improve outcomes for patients with malignancy and also reduce the number of false positives that otherwise lead patients to unnecessary chemotherapy, radiotherapy or surgical interventions.

The machine learning approach takes into account nine characteristics of a minimally invasive fine needle biopsy, including clump thickness, uniformity of cell size, adhesions, epithelial cell size, bare cell nuclei and other factors. Trained on definitive data annotated as malignant or benign, the system was able to correlate the many disparate visual factors present in the data with the outcome. The statistical model thus developed could then be used to test new tissue samples for malignancy.
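The paper's exact model isn't described here, but the general recipe -- learn a decision rule from labelled biopsy features, then score new samples -- can be sketched with a plain logistic regression. The synthetic data below mimics the 1-10 cytology scoring scale; none of it is the study's actual data or model:

```python
import math
import random

random.seed(0)

N_FEATURES = 9  # nine biopsy characteristics, as in the article

def make_samples(n, lo, hi, label):
    """Synthetic stand-in samples: feature scores drawn uniformly,
    rescaled to [0, 1], tagged benign (0) or malignant (1)."""
    return [([random.uniform(lo, hi) / 10.0 for _ in range(N_FEATURES)], label)
            for _ in range(n)]

# Benign growths score low on the cytology scale, malignant tumours high.
data = make_samples(50, 1, 4, 0) + make_samples(50, 6, 10, 1)
random.shuffle(data)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train logistic regression by stochastic gradient descent.
weights = [0.0] * N_FEATURES
bias = 0.0
lr = 0.5
for _ in range(300):
    for x, y in data:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(sample):
    """Return 1 (malignant) or 0 (benign) for nine features scored 1-10."""
    x = [v / 10.0 for v in sample]
    return int(sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) >= 0.5)
```

On this toy data, `predict([9] * 9)` flags a uniformly high-scoring sample as malignant while `predict([2] * 9)` returns benign, which is the same train-then-test pattern the article describes.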


Monday, 11 November 2013

First Virtual Surgery With Google Glass


A University of Alabama at Birmingham surgical team has performed the first surgery using a virtual augmented reality technology called VIPAAR in conjunction with Google Glass, a wearable computer with an optical head-mounted display. The combination of the two technologies could be an important step toward the development of useful, practical telemedicine.


VIPAAR, which stands for Virtual Interactive Presence in Augmented Reality, is a UAB-developed technology that provides real time, two-way, interactive video conferencing.

UAB orthopedic surgeon Brent Ponce, M.D., performed a shoulder replacement surgery on Sept. 12, 2013 at UAB Highlands Hospital in Birmingham. Watching and interacting with Ponce via VIPAAR was Phani Dantuluri, M.D., from his office in Atlanta.

Ponce wore Google Glass during the operation. The built-in camera transmitted the image of the surgical field to Dantuluri. VIPAAR allowed Dantuluri, who saw on his computer monitor exactly what Ponce saw in the operating room, to introduce his hands into the virtual surgical field. Ponce saw Dantuluri's hands as a ghostly image in his heads-up display.

"It's not unlike the line marking a first down that a television broadcast adds to the screen while televising a football game," said Ponce. "You see the line, although it's not really on the field. Using VIPAAR, a remote surgeon is able to put his or her hands into the surgical field and provide collaboration and assistance."

The two surgeons were able to discuss the case in a truly interactive fashion since Dantuluri could watch Ponce perform the surgery yet could introduce his hands into Ponce's view as if they were standing next to each other.

"It's real time, real life, right there, as opposed to a Skype or video conference call which allows for dialogue back and forth, but is not really interactive," said Ponce.

UAB physicians say this kind of technology could greatly enhance patient care by allowing a veteran surgeon to remotely provide valuable expertise to less experienced surgeons. VIPAAR owes its origins to UAB neurosurgeon Barton Guthrie, M.D., who some ten years ago grew dissatisfied with the then-current state of telemedicine.

"So called 'telemedicine' was little more than a telephone call between two physicians," Guthrie recalled. "A surgeon in a small, regional hospital might call looking for guidance on a difficult procedure -- one that perhaps I'd done a hundred times but he'd only done once or twice. How advantageous to the patient would it be if we could get our hands and instruments virtually into the field of a surgeon who has skills and training and lacks only experience?"

"The paradigm of the telephone consultation is, 'Do the best you can and send the patient to me when stable', while the paradigm with VIPAAR is 'Get me to the patient.' Let's get my expertise and experience to the physician on the front line, and I think we can implement that concept with these technologies," Guthrie said.

Ponce says VIPAAR allows the remote physician to point out anatomy, provide guidance or even demonstrate the proper positioning of instruments. He says it could be an invaluable tool for teaching residents, or helping surgeons first learning a new procedure.

"This system is able to provide that help from an expert who is not on site, guiding and teaching new skills while enhancing patient safety and outcomes," he said. "It provides a safety net to improve patient care by having that assistance from an expert who is not in the room."
In 2003, Guthrie approached the Enabling Technology Laboratory in UAB's Mechanical Engineering Department, which was already at work on virtual, interactive technologies, with the idea of using two-way video to enhance surgery. The resulting technology became VIPAAR, now a start-up company at Innovation Depot, a technology business incubator partnered with UAB.

"VIPAAR brings experts or collaborators to the site of need, in any field where a visual collaboration would be beneficial," said Drew Deaton, CEO of VIPAAR. "VIPAAR uses video on mobile devices to allow experts or collaborators to connect in real time and not only see what might need to be fixed, corrected or solved, but also be able to reach in, using tools or just their hands, and demonstrate. It's like being there, side by side with someone when you might be a thousand miles, or 10 thousand miles away."

Deaton says potential applications for VIPAAR go beyond medicine and surgery. He says field service is a burgeoning area, from a service call to fix a home heating system, to keeping an industrial manufacturing process on line and running.

"When there is a breakdown, the time to respond and resolve an issue is critical in the field service world," Deaton said. "VIPAAR is helping field service engineers solve problems as fast as possible and get their customers up to speed as fast as possible."

Ponce and Dantuluri were pleased with the results of their interactive collaboration. Adjustments will be needed to fine tune the marriage between VIPAAR and Google Glass, but the promise of useful, practical telemedicine is drawing ever closer. Deaton calls it one more step on the technology evolutionary ladder.

"Today, you can't imagine having a phone without the capability to take a picture or a video," he said. "I can't imagine, five years from now, not being able to use a smart phone to connect to an expert to solve my problem, and have that person reach in and show me how to solve that problem, because the technology is advancing rapidly and we're bringing this technology to market today."


Sunday, 10 November 2013

Inkblots Improve Security of Online Passwords


Carnegie Mellon University computer scientists have developed a new password system that incorporates inkblots to provide an extra measure of protection when, as so often occurs, lists of passwords get stolen from websites.


This new type of password, dubbed a GOTCHA (Generating panOptic Turing Tests to Tell Computers and Humans Apart), would be suitable for protecting high-value accounts, such as bank accounts, medical records and other sensitive information.

To create a GOTCHA, a user chooses a password and a computer then generates several random, multi-colored inkblots. The user describes each inkblot with a text phrase. These phrases are then stored in a random order along with the password. When the user returns to the site and signs in with the password, the inkblots are displayed again along with the list of descriptive phrases; the user then matches each phrase with the appropriate inkblot.
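A much-simplified sketch of that enrollment-and-login flow is below. This is not the authors' actual construction: it uses SHA-256 where a real deployment would use a deliberately slow password hash, and it stands in inkblot images with opaque identifiers:

```python
import hashlib
import random

def H(password, perm):
    """Hash of the password together with the puzzle answer (a stand-in
    for a slow password hash such as bcrypt or scrypt)."""
    return hashlib.sha256(password.encode() + bytes(perm)).hexdigest()

def enroll(password, labels):
    """labels[i] is the user's (distinct) phrase for inkblot i.
    Returns the shuffled phrase list and the stored hash record."""
    n = len(labels)
    order = list(range(n))
    random.shuffle(order)                           # server stores phrases shuffled
    stored_phrases = [labels[i] for i in order]
    # perm[i] = position of blot i's phrase in the shuffled list: the answer
    # the legitimate user can reconstruct, but the server never stores.
    perm = [stored_phrases.index(labels[i]) for i in range(n)]
    return stored_phrases, H(password, perm)        # only the hash is kept

def login(password, answer, record):
    """answer[i] = index of the phrase the user picks for inkblot i."""
    return H(password, answer) == record
```

Because only the hash of (password, matching) is stored, an offline attacker must guess both the password and the inkblot matching at once, which is the property the researchers are after.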

"These are puzzles that are easy for a human to solve, but hard for a computer to solve, even if it has the random bits used to generate the puzzle," said Jeremiah Blocki, a Ph.D. student in computer science who developed GOTCHAs along with Manuel Blum, professor of computer science, and Anupam Datta, associate professor of computer science and electrical and computer engineering.

These puzzles would prove significant when security breaches of websites result in the loss of millions of user passwords -- a common occurrence that has plagued companies such as LinkedIn, Sony and Gawker. The passwords are stored as cryptographic hashes, in which passwords of any length are converted into strings of bits of uniform length. A thief can't readily decipher these hashes, but can mount what's called an automated offline dictionary attack. Computers today can evaluate as many as 250 million possible hash values every second, Blocki noted.
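An offline dictionary attack is simple to state: hash each candidate password and compare it against the stolen hash. SHA-256 is used here as a stand-in; breached sites often used similarly fast hashes, which is precisely what makes the attack cheap:

```python
import hashlib

def sha256_hex(password):
    """Fast unsalted hash, as a stand-in for what a breached site stored."""
    return hashlib.sha256(password.encode()).hexdigest()

def dictionary_attack(stolen_hash, wordlist):
    """Try every candidate in the wordlist against the stolen hash;
    return the recovered password, or None if it wasn't in the list."""
    for guess in wordlist:
        if sha256_hex(guess) == stolen_hash:
            return guess
    return None
```

Against a GOTCHA-protected record, each candidate in this loop would additionally require a human to re-solve the inkblot matching, destroying exactly the throughput that makes the attack feasible.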

Given the continued popularity of easy passwords, such as "123456" or "password," it's not always difficult to crack these hashes. But even hard passwords are vulnerable to the latest brute force methods, Blocki said. In the case of a GOTCHA, however, a computer program alone wouldn't be enough to break into an account.

"To crack the user's password offline, the adversary must simultaneously guess the user's password and the answer to the corresponding puzzle," Datta said. "A computer can't do that alone. And if the computer must constantly interact with a human to solve the puzzle, it no longer can bring its brute force to bear to crack hashes."

The researchers described GOTCHAs at the Association for Computing Machinery's Workshop on Artificial Intelligence and Security in Berlin, Germany, Nov. 4.
Because the user's descriptive phrases for the inkblots are stored, users don't have to memorize their descriptions -- they only have to be able to pick them out from a list. To see if people could do this reliably, the researchers performed a user study with 70 people hired through Mechanical Turk. First, each user was asked to describe 10 inkblots with creative titles, such as "evil clown" or "lady with poofy dress." Ten days later, they were asked to match those titles with the inkblots. Of the 58 participants who completed the second round of testing, one-third correctly matched all of the inkblots and more than two-thirds got half right.

Blocki said the design of the user study, including financial incentives that were too low, might account for the less-than-stellar performance. But he said there also are ways to make descriptions more memorable. One way would be to use more elaborate stories, such as "a happy guy on the ground protecting himself from ticklers."


Friday, 8 November 2013

Virtually Numbed: Immersive Video Gaming Alters Real-Life Experience




Role-playing video games can alter our experience of reality and numb us to important real-life experiences, a new study finds.

Spending time immersed as a virtual character or avatar in a role-playing video game can numb you to important bodily signals in real life. This message comes from Ulrich Weger of the University of Witten/Herdecke in Germany and Stephen Loughnan of the University of Melbourne in Australia, in an article in the journal Psychonomic Bulletin & Review, published by Springer.
The researchers studied what happens when gamers take on the role of -- and identify with -- a nonhuman character such as an avatar during immersive video gaming, and how it especially influences their experience of pain. Avatars often have automaton-like, robotic characteristics such as mechanistic inertness, rigidity and a lack of emotion and warmth.

Participants were asked how much time they spend each week playing video games. Their responses were then correlated with a measure of pain tolerance by counting the number of paperclips that they could retrieve from ice-cold water. In a second experiment, participants played either an immersive or a nonimmersive computer game before taking part in the same pain-resistance task. The immersive video-game players exhibited a reduced sensitivity to pain and removed significantly more paperclips from ice-cold water. They were also more indifferent to people depicted as experiencing displeasure than were the nonimmersive players.
Weger and Loughnan found that by taking on and acting from the perspective of an automaton-like avatar, people are desensitized to pain in themselves and in others. The point of view adopted during video gaming appears to have implications that extend beyond the virtual environment, into real life.

Dr. Weger points to what he sees as a misleading development: the human-machine boundary is increasingly being blurred, either by humans entering virtual machines/robots, or by anthropomorphizing -- in other words, adding human qualities to animated figures and toys. Machines are being programmed to appeal to human inclinations, while virtual characters and robots have started to perform tasks or roles traditionally held by humans, such as that of robot counselling therapists. In such an environment it becomes increasingly easy and normal to regard artificial beings as akin to human beings.
"We see this blurring as a reality of our time but also as a confused and misleading development that has begun to shape society," says Weger. "We believe this should be balanced by other developments, for example, by working on our awareness of what it really means to be human. We should also look into how we can best make use of the beneficial applications of robotic or artificial intelligence advances, so as to be able to use our freed up resources and individual potentials wisely rather than becoming enslaved by those advances."