
Beauty Now in the Eye of the Algorithm



New image recognition technology judges photographic aesthetics.

New technology from Xerox can sort photos not just by their content but also according to their aesthetic qualities, such as which portraits are close-in and well-lit, or which wildlife shots are least cluttered.
Still in the prototype stage, the technology could eventually help with tasks like choosing which of hundreds of digital photos taken on a family vacation should appear in a photo album. It could help stock agencies sort photos by their characteristics, and it could be deployed inside a camera to help people delete lower-quality scenes quickly, saving on storage space and hassle.
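Conceptually, once a model can assign each photo an aesthetic score, the sorting task becomes simple. The Python sketch below illustrates the idea; the scoring function, its weights, and the feature names are illustrative assumptions, since Xerox has not published its implementation.

```python
# A minimal sketch of aesthetic-based photo ranking, assuming a scoring
# model is available. The toy linear score below is a hypothetical
# stand-in for a learned aesthetic model, not the Xerox system.
from dataclasses import dataclass

@dataclass
class Photo:
    filename: str
    sharpness: float           # 0..1, higher is crisper
    background_clutter: float  # 0..1, higher is busier
    subject_lighting: float    # 0..1, higher is better lit

def aesthetic_score(photo: Photo) -> float:
    """Toy linear score standing in for a learned aesthetic model."""
    return (0.4 * photo.sharpness
            + 0.4 * photo.subject_lighting
            - 0.2 * photo.background_clutter)

def pick_album_photos(photos: list[Photo], k: int) -> list[Photo]:
    """Keep the k highest-scoring shots, e.g. for a vacation album."""
    return sorted(photos, key=aesthetic_score, reverse=True)[:k]

shots = [
    Photo("beach_001.jpg", sharpness=0.9, background_clutter=0.2, subject_lighting=0.8),
    Photo("beach_002.jpg", sharpness=0.4, background_clutter=0.7, subject_lighting=0.3),
]
print([p.filename for p in pick_album_photos(shots, k=1)])
```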

"What they show is that now you don't need a human to select images that are going to be judged beautiful," says Aude Olivia, an associate professor of brain and cognitive sciences at MIT, who also works on image recognition. "You can run the algorithm, and it will give a good estimate."
The technology—developed at the Xerox Research Center Europe in Grenoble, France—is slated for beta testing with Xerox corporate partners next year, says Craig Saunders, manager of the computer vision research group there. These partners include graphic design firms, online photo-book companies, and stock agencies, all of which might want new ways to sort and find photos.

The Xerox system learns about quality photography by studying photos that have previously been chosen for public display, such as images shared publicly in Facebook albums or tagged as high quality on Flickr. It then notes the common characteristics of these photos.
Not surprisingly, these characteristics often correspond to what experts already understand about good photographs. The best portraits of people, for example, have indirect lighting and blurry or monochromatic backgrounds that help keep the focus on the person. Good beach photos often include silky-looking waves, a trick achieved through slow shutter speeds. And many kinds of photos are appealing because they follow the "rule of thirds," with subjects divided among three zones of the frame. "We try to learn what it is about these features that makes photos 'good,'" says Saunders.
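To make one of these cues concrete, here is a toy Python sketch of a rule-of-thirds score. The falloff function and its constants are assumptions for illustration, not the Xerox algorithm; in a real system the subject position would come from a detector.

```python
# A minimal sketch of one learnable compositional cue: the rule of
# thirds. Coordinates are normalized to the frame (0..1). The scoring
# heuristic is an illustrative assumption.
import math

def rule_of_thirds_score(subject_x: float, subject_y: float) -> float:
    """Score near 1.0 when the subject sits on a thirds intersection,
    falling off with distance from the nearest of the four intersections."""
    intersections = [(x, y) for x in (1/3, 2/3) for y in (1/3, 2/3)]
    d = min(math.hypot(subject_x - x, subject_y - y) for x, y in intersections)
    return max(0.0, 1.0 - 3.0 * d)  # linear falloff, clipped at zero

print(rule_of_thirds_score(0.33, 0.66))  # near an intersection -> close to 1
print(rule_of_thirds_score(0.5, 0.5))    # dead center -> lower score
```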

We Will All Talk to Computers

Ben Bajarin is the Director of Consumer Technology Analysis and Research at Creative Strategies, Inc., a technology industry analysis and market intelligence firm located in Silicon Valley.

When Apple showed the world Siri, I believe they showed us the next major man-to-machine user interface.
The idea of talking to computers is nothing new. It has, of course, been featured in sci-fi novels, movies and TV shows for years. Software itself has hinted at voice as the next input method for more than a decade. The challenge has always been bringing it to the mass market, and that is what Apple plans to do with Siri on the iPhone 4S.


This technology has been in development for quite a while and is getting progressively better. Beyond bringing it to the mass market, the other challenge has been making it useful by going beyond simple dictation. One of the most impressive elements of Siri is not just its ability to do voice-to-text dictation, but its ability to turn natural-language directives into action.

What I mean by that is that I can use my voice to say, "Remind me to feed my goats when I get home." Because Siri is trained to know where my house is and the iPhone 4S has GPS, the second I drive into my driveway, I get a reminder that tells me to feed the goats. I live on a farm and this is quite handy for me.
It's a valuable proposition to be able to use voice commands to create calendar items, search the web, get abstract information like how many feet are in a mile, search local information, set alarms, check the weather, and much more. This can be done because Siri is tied to some very powerful databases and, through its AI and voice comprehension technology, can deliver some amazingly accurate information that has already proven helpful to me and many who have used this technology.
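The location-triggered reminder is easy to picture in code. Below is a minimal Python sketch of the geofencing idea; the home coordinates, trigger radius, and function names are illustrative assumptions, since Siri's internals are not public.

```python
# A minimal sketch of a geofenced reminder, assuming the phone exposes
# a current GPS fix. All names and values here are illustrative.
import math

HOME = (37.3349, -122.0090)  # example lat/lon, not a real user's home

def distance_m(a, b):
    """Approximate ground distance in meters between two lat/lon points."""
    lat_scale = 111_320  # meters per degree of latitude
    lon_scale = lat_scale * math.cos(math.radians(a[0]))
    return math.hypot((a[0] - b[0]) * lat_scale, (a[1] - b[1]) * lon_scale)

def check_reminders(current_fix, reminders, trigger_radius_m=100):
    """Fire any location-based reminder whose place is within the radius."""
    for place, text in reminders:
        if distance_m(current_fix, place) <= trigger_radius_m:
            print(f"Reminder: {text}")

reminders = [(HOME, "Feed the goats")]
check_reminders((37.3350, -122.0091), reminders)  # pulling into the driveway
```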

What is fascinating is that as I have been using Siri, the experience actually feels more like a conversation than me ordering my iPhone to do things. This is because when you use your voice to create an action, Siri asks you relevant questions in order to make sure it takes the correct action.

For example, the first time I told it to call my dad, Siri asked, "What is your father's name?" I responded "Tim" and Siri said, "Do you want me to remember that Tim Bajarin is your father?" I answered yes and Siri acknowledged that it would remember that Tim Bajarin is my father.
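That clarify-and-remember exchange follows a common pattern: when a required piece of information is missing, ask for it, then store the answer for next time. Here is a toy Python sketch of the pattern; the contact list and relationship store are illustrative assumptions, not Siri's actual design.

```python
# A minimal sketch of slot-filling with a clarifying question and a
# remembered answer. Purely illustrative; not how Siri is implemented.
contacts = {"Tim Bajarin": "555-0100"}
relationships = {}  # e.g. "father" -> "Tim Bajarin", filled in over time

def call_relative(relation: str) -> None:
    if relation not in relationships:
        # Missing slot: ask a clarifying question instead of failing.
        name = input(f"What is your {relation}'s name? ")
        # Resolve a first name against the contact list.
        matches = [c for c in contacts if c.split()[0].lower() == name.lower()]
        if not matches:
            print(f"I couldn't find {name} in your contacts.")
            return
        relationships[relation] = matches[0]
        print(f"OK, I'll remember that {matches[0]} is your {relation}.")
    person = relationships[relation]
    print(f"Calling {person} at {contacts[person]}...")

call_relative("father")  # first call asks; later calls go straight through
```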

Another example was when I was in an unfamiliar part of a city. I brought up the voice prompt and asked, "How do I get home?" Because I had set Siri up to know my home location, it then quickly gave me directions to my house using Google Maps.

Experiences like this cause you to realize that we are only just starting to scratch the surface of using our voices to interact with personal computers.

UCLA Researchers Discover Rhythmic Secrets of the Brain


Neuroscientists have long pondered the mechanism behind learning and memory formation in the human brain. On the cellular level, it's generally agreed that we learn when stimuli are repeated frequently enough that our synapses - the gap-connections between neurons - respond and become stronger. Now, a team of UCLA neurophysicists has discovered that this change in synaptic strength actually has an optimal "rhythm," or frequency, a finding that could one day lead to new strategies for treating learning disabilities.

"Many people have learning and memory disorders, and beyond that group, most of us are not Einstein or Mozart," said Mayank R. Mehta, one of the study's co-investigators. "Our work suggests that some problems with learning and memory are caused by synapses not being tuned to the right frequency."

The tendency for connections between neurons to grow stronger in response to repeated stimuli is known as synaptic plasticity. The series of signals one neuron gets from the others to which it's connected, dubbed "spike trains," arrive with variable frequencies and timing, and it's these trains that induce formation of stronger synapses - the very basis for "practice makes perfect."

In previous studies, it was shown that very high frequency neuronal stimulation (about 100 spikes per second) led to stronger connecting synapses, while stimulation at a much lower frequency (one spike per second) actually reduced synaptic strength. But real-life neurons, performing routine behavioral tasks, only fire 10 or so consecutive spikes, not hundreds, and they do this at a far lower frequency - around 50 spikes per second.
Achieving experimental spike rates that more closely approximate real life has proved rather elusive, however. Mehta explains one of the variables they encountered: "Spike frequency refers to how fast the spikes come. Ten spikes could be delivered at a frequency of 100 spikes a second or at a frequency of one spike per second."
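The distinction Mehta draws is easy to see in code. This small Python sketch generates the same ten spikes at two different frequencies; it is an illustration of the terminology, not the authors' model.

```python
# A minimal sketch: the same number of spikes delivered at different
# frequencies. Times are in seconds; values are illustrative.
def spike_train(n_spikes: int, frequency_hz: float) -> list[float]:
    """Return spike times for n_spikes delivered at a fixed frequency."""
    interval = 1.0 / frequency_hz
    return [i * interval for i in range(n_spikes)]

print(spike_train(10, 100.0))  # ten spikes packed into ~0.1 s
print(spike_train(10, 1.0))    # the same ten spikes spread over ~10 s
```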

But Mehta and his co-investigator, Arvind Kumar, didn't let that hurdle stop them. Instead, they worked out a complex mathematical model and validated it with actual data from their experiments. Now able to generate spike patterns closer to those that occur naturally, the team discovered that, contrary to their predictions, neuron stimulation at the highest frequencies wasn't the ideal way to bolster synaptic strength.
"The expectation, based on previous studies, was that if you drove the synapse at a higher frequency, the effect on synaptic strengthening, or learning, would be at least as good as, if not better than, the naturally occurring lower frequency," Mehta said. "To our surprise, we found that beyond the optimal frequency, synaptic strengthening actually declined as the frequencies got higher."
The realization that synapses have optimal frequencies for learning prompted Mehta and Kumar to determine whether synapse location on a neuron had any specific role. They discovered that the more distant the synapse was from the neuron's bulbous main body, the higher the frequency it required for optimal strengthening. "Incredibly, when it comes to learning, the neuron behaves like a giant antenna, with different branches of dendrites tuned to different frequencies for maximal learning," Mehta said.
The team also revealed that, aside from having optimal frequencies at which maximal learning occurs, synapses strengthen best when spikes arrive in a precisely timed, perfect rhythm. Take away the beat, they found, and even at the ideal frequency, synaptic strengthening is appreciably compromised.
(Image: A neuron with a tree-trunk-like dendrite. Each triangular shape touching the dendrite represents a synapse, where inputs from other neurons, called spikes, arrive. Synapses farther from the cell body on the dendritic tree require a higher spike frequency - spikes that come closer together in time - arriving with perfect timing to generate maximal learning. Credit: UCLA Newsroom)
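As a way to picture the two findings together, here is a toy Python model in which strengthening peaks at an optimal frequency and degrades with timing jitter. The Gaussian tuning curve and every parameter are illustrative assumptions, not the equations from the paper.

```python
# A toy model of frequency-tuned, rhythm-sensitive plasticity.
# All functional forms and constants are illustrative assumptions.
import math

def plasticity(freq_hz: float, optimal_hz: float = 50.0,
               bandwidth_hz: float = 20.0, jitter_frac: float = 0.0) -> float:
    """Relative synaptic strengthening for a spike train at freq_hz.

    jitter_frac is spike-timing jitter as a fraction of the inter-spike
    interval; perfect rhythm is jitter_frac = 0.
    """
    tuning = math.exp(-((freq_hz - optimal_hz) ** 2) / (2 * bandwidth_hz ** 2))
    rhythm_penalty = max(0.0, 1.0 - 2.0 * jitter_frac)
    return tuning * rhythm_penalty

for f in (10, 50, 100):
    print(f"{f:>3} Hz, perfect rhythm: {plasticity(f):.2f}")
print(f" 50 Hz, 25% jitter:      {plasticity(50, jitter_frac=0.25):.2f}")
```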
As if these remarkable revelations weren't enough, the researchers also discovered that a synapse's optimal frequency changes once it learns. They believe that understanding this fundamental property could yield insight into treatments for conditions involving memory dysfunction (or the need to forget), such as post-traumatic stress disorder.

With additional study, these findings could possibly lead to the development of new drugs capable of "re-calibrating" faulty brain rhythms in people with memory or learning disorders. "We already know there are drugs and electrical stimuli that can alter brain rhythms," Mehta said. "Our findings suggest that we can use these tools to deliver the optimal brain rhythm to targeted connections to enhance learning."

The research paper, entitled "Frequency-dependent changes in NMDAR-dependent synaptic plasticity," is available online in Frontiers in Computational Neuroscience.

Source: UCLA

Graphene: Next-Generation Computer Chip


Since its discovery in 2004, the two-dimensional layer of carbon atoms known as graphene has promised to revolutionize materials science, enabling flexible, transparent touch displays, lighter aircraft, cheaper batteries and faster, smaller electronic devices. Now in what could be a key step towards replacing silicon chips in computers, researchers at the University of Manchester have sandwiched two sheets of graphene with another two-dimensional material, boron nitride, to create what they have dubbed a graphene "Big Mac".

(Image: Researchers have sandwiched layers of graphene between layers of boron nitride to create a multilayer "Big Mac" structure)

The researchers used two layers of boron nitride not only to separate the two graphene layers, but also to see how graphene reacts when it is completely encapsulated by another material. The researchers say this has allowed them, for the first time, to observe how graphene behaves when unaffected by the environment, and demonstrates how graphene inside electronic circuits will probably look in the future.
"Creating the multilayer structure has allowed us to isolate graphene from negative influence of the environment and control graphene's electronic properties in a way it was impossible before," said Dr Leonid Ponomarenko. "So far people have never seen graphene as an insulator unless it has been purposefully damaged, but here high-quality graphene becomes an insulator for the first time."
"Leaving the new physics we report aside, technologically important is our demonstration that graphene encapsulated within boron nitride offers the best and most advanced platform for future graphene electronics," added Professor Andre Geim who, along with Professor Kostya Novoselov, was awarded the Nobel Prize for Physics last year for the discovery of graphene at the University of Manchester in 2004.
"It solves several nasty issues about graphene's stability and quality that were hanging for a long time as dark clouds over the future road for graphene electronics. We did this on a small scale but the experience shows that everything with graphene can be scaled up," said Geim. "It could be only a matter of several months before we have encapsulated graphene transistors with characteristics better than previously demonstrated."
The research team's paper, "Tunable metal-insulator transition in double-layer graphene heterostructures," appears in the journal Nature Physics.

More at http://www.gizmag.com/graphene-big-mac/20116/