Haptix promises to transform any flat surface into a 3D multitouch surface for controlling your computer, TV, or any other screen. One of many potential uses: when you’re cooking and your hands are dirty, you could scroll through recipes on your table instead of touching your tablet. An interesting potential device.
After five years of explosive growth, sales of high-end smartphones have hit a plateau, and the $2 trillion industry – telecom carriers, handset makers and content providers – is buckling up for a bumpier ride as growth shifts to emerging markets, primarily in Asia. Jeremy Wagstaff and Lee Chyen Yee report for Reuters.
This year, the number of mobile Internet users in the developing world will overtake those in the developed world for the first time – having grown 27-fold since 2007, compared with the developed world’s fourfold growth, according to estimates from the International Telecommunication Union (ITU).
“The center of gravity in the mobile ecosystem is likely to shift from the United States and Western Europe toward Asia,” Mary Ellen Gordon, director at mobile advertiser Flurry Inc, said in an emailed interview.
Poor network coverage or the high cost of 3G access relative to phone and SMS services still hold many users back. Last year, according to market research firm Euromonitor, 62 percent of all mobile phones sold in China were smartphones, but only 16 percent of subscribers had access to a mobile Internet connection.
The three carriers – China Mobile, China Unicom and China Telecom – typically dole out billions of dollars of handset subsidies to entice users to subscribe to their networks, dragging down profit margins.
The personal computer is dying. Its place in our lives as the primary means of computing will soon end. Mobile computing—the cell phone in your pocket or the tablet in your purse—has been a great bridging technology, connecting the familiar past to a formative future. But mobile is not the destination. In many ways mobile devices belong more to the dying PC model than to the real future of computing. From technologyreview.com:
Chips and sensors are finding their way into clothing, personal accessories, and more. These devices are capturing information whose impact is not yet meaningful to most people. But it will be soon enough. The question we need to answer is: how will these intertwined systems of hardware and software be designed to meaningfully add to our lives and to society?
Today we are enjoying what computing has done to enhance our lives, but we do not like having to baby-sit all the devices that give us access. We have to tell them what to do. The next wave of computing devices will be different because they won’t wait for our instructions. They will feel more like natural extensions of what we do in our lives. The hardware and software technologies behind this ubiquitous-computing model will become the focus of a radically changed computing industry.
Your cellphone is ratting you out. Stores can keep track of who you are and what you buy – unless you pay cash. Now they can also use your cellphone – even without your permission – to find out what you’re just looking at or trying on. The Los Angeles Times reports:
Nordstrom, the high-end department store, began using a technology that tracks the Wi-Fi signals from shoppers’ smartphones to follow them virtually throughout the store, from display to display, item to item, and to check how long they spent looking at what, just as websites can do now.
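In-store analytics of this kind generally works by logging Wi-Fi probe requests (which carry a phone’s MAC address) at sensors near each display, then computing dwell time per zone. Below is a minimal sketch of that dwell-time computation, with entirely hypothetical sighting data – the real Nordstrom system’s internals are not public:

```python
from collections import defaultdict

def dwell_times(events):
    """Compute seconds spent per (device, zone) from sensor sightings.

    events: list of (mac, zone, timestamp_seconds), assumed sorted by time.
    A device is counted as 'in' a zone from one sighting until the next;
    the final sighting contributes nothing (it has no end point).
    """
    last_seen = {}                  # mac -> (zone, timestamp)
    totals = defaultdict(float)     # (mac, zone) -> seconds
    for mac, zone, ts in events:
        if mac in last_seen:
            prev_zone, prev_ts = last_seen[mac]
            totals[(mac, prev_zone)] += ts - prev_ts
        last_seen[mac] = (zone, ts)
    return dict(totals)

# Hypothetical sightings: one phone lingering at the shoe display
events = [
    ("aa:bb", "entrance", 0),
    ("aa:bb", "shoes", 30),
    ("aa:bb", "shoes", 150),
    ("aa:bb", "checkout", 160),
]
print(dwell_times(events))
# {('aa:bb', 'entrance'): 30.0, ('aa:bb', 'shoes'): 130.0}
```

The same aggregation works for any stream of (identifier, location, time) events, which is what makes this technique so easy for retailers to deploy.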
Nordstrom isn’t the only company trying this out. At least Nordstrom put up a sign to let customers know what it was doing, and not all of them liked it.
But can stores legitimately argue that they don’t have to tell us they’re shoplifting our data, and presuming that we give permission for them to do so just by setting foot in their stores?
There is no privacy anymore. Walk by certain stores where I live with your phone on its default settings, and you start getting messages from retailers. What do they do with this data? See more: Trading Privacy for Convenience
Qualcomm’s vlog imagined what a World Without Mobile would look like.
I don’t give much credence to the theory that there is wilful collusion between government and corporate entities, but there is no doubt that our activities, online and off, are being tracked, and the resulting data can have many different uses for many different entities. We are the product.
The current surveillance state is a result of a government/corporate partnership, and our willingness to give up privacy for convenience.
If the government demanded that we all carry tracking devices 24/7, we would rebel. Yet we all carry cell phones… If the government demanded that we give them access to all the photographs we take, and that we identify all of the people in them and tag them with locations, we’d refuse. Yet we do exactly that on Flickr and other sites.
One of the barriers to wearable technology: are we ready to get so intimate with technology that we’re prepared to wear it? Intel and its team of futurologists and anthropologists have a vision of a world where the technology is not an adjunct (as the mobile phone or the tablet is now) but embedded in our lives, generating and mining data in a way that’s functional and useful to us. The Guardian reports:
While Google Glass is a proof-of-concept device, it points the way to a paradigm that will become increasingly part of our lives. In effect, Google Glass is a personal digital assistant that you wear – taking photographs, pointing the way to the pub where you’re meeting a friend, overlaying the view of an unfamiliar city with an augmented-reality layer of information about the shops, hotels, places to eat, transport options and cultural attractions. It is in many ways a clumsy first step into a world that, like it or not, is going to collect, use and exploit the data we all generate all day long.
Viewed through Intel’s crystal ball, in the future we’ll have devices that second-guess us, or make intelligent connections on our behalf. One narrative constructed to exemplify this is that of a glossy middle-class thirtysomething woman, whose personal device deduces from her existing music collection that she would like another band that’s coming to town, and proactively buys tickets for a forthcoming gig – calculating that the tickets are already selling like hot cakes, so if she isn’t pleased with the decision to snap them up, those tickets will find a willing buyer.
Fujitsu Laboratories has developed a next-generation user interface which can accurately detect the user’s finger and what it is touching, creating an interactive touchscreen-like system using objects in the real world.
We think paper and many other objects could be manipulated by touching them, as with a touchscreen. This system doesn’t use any special hardware; it consists of just a device like an ordinary webcam, plus a commercial projector. Its capabilities are achieved by image processing technology.
Until now, gesturing has often been used to operate PCs and other devices. But with this interface, we’re not operating a PC, but touching actual objects directly, and combining them with ICT equipment.
The system is designed not to react when you make ordinary motions on a table. It can be operated when you point with one finger. What this means is, the system serves as an interface combining analog operations and digital devices.
New findings from Ericsson ConsumerLab have underlined the crucial role of good connectivity and network quality in smartphone user experience and operator loyalty.
More than 3,500 smartphone owners in Finland, Switzerland and the Netherlands took part in the Ericsson ConsumerLab survey, the results of which are now available in a report called “Smartphone Usage Experience – the importance of network quality and its impact on user satisfaction”.
Key findings include:
- Reliability is valued: broad coverage where needed and a fast and reliable network are the strongest drivers of network satisfaction.
- Satisfaction scores higher: users who are very satisfied with the performance of their network have a positive effect on the Net Promoter Score.
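The Net Promoter Score mentioned in the findings is a standard metric computed from 0–10 “likelihood to recommend” ratings. A minimal sketch of the calculation, with made-up survey data:

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) are ignored.
    NPS = % promoters - % detractors, ranging from -100 to +100.
    """
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 6]))   # 30.0
```

This is why very satisfied network users lift the score: each respondent who moves from passive or detractor into the 9–10 band raises NPS directly.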
Read the report here (.pdf).
From a four-part series exploring four dimensions of using iPads in educational settings, examining how teachers can take students on a journey from (1) consumption of media, to (2) curation, (3) creation, and (4) connection. Below is an excerpt from “consumption”:
In the apocryphal photo of the iPad, the tablet rests in the lap of Steve Jobs, sitting on the stage at the iPad release demonstration, reclined in a leather chair. This was a device made for reading and watching, for sitting back, for passively consuming media. One of the signature challenges of the surge of interest in iPads is helping educators imagine the device as more than a library of books or a rolodex of apps, but as a flexible, mobile device for creating multimedia performances of understanding. Educators using iPads should start by thinking about how the device can foster critical reading of text, images, audio, and film, but consumption should be the point of departure on a journey towards more active student engagement.
To oversimplify, there are two kinds of reading that students are asked to do in school settings: focused and connected. In the focused reading mode, we hope young people will engage deeply with a text. As Mark Ott, the chair of the English Department at Deerfield Academy recently told me, “Students used to sit at a desk with nothing but a copy of Thoreau’s Walden and experience sustained engagement with Thoreau’s ideas. We want to preserve that experience in a world where devices are constantly competing for their attention.” Whether the copy of Walden is the $4.99 paperback or the free digital copy from the iBooks library, educators still believe in the importance of focused reading.
Some of the hottest consumer products continue to be Web-connected multipurpose devices, such as tablets and smartphones. Since the advent of the app store, there seems to be no limit to what these gadgets can do. Nothing is just a phone, a camera or a television anymore. From the WSJ:
Arguing for multi-purpose devices, Berkowitz writes:
It’s true that my GPS-enabled camera takes better pictures than my smartphone, and can tell me they were taken in California. But learning to share the pictures with friends takes more effort than it is worth. In the new marketplace, devices people can’t master in five minutes will result in a lot of returned items, which very quickly makes a product unprofitable.
Multipurpose devices offer other financial edges for manufacturers and consumers. A consumer would have to buy 10 different devices to reproduce the most popular functions in a single smartphone or tablet. Smartphones and other multipurpose devices are produced in high volumes, too, which in turn drives manufacturer costs and consumer prices lower.
Saffer writes an opposing view:
It’s not just professionals who care about quality, either. Yes, the speaker on a phone is good enough to listen to a song in a pinch. But to really enjoy the music, even your multipurpose device must be supplemented with a product like the brilliant Jambox, designed to play music loudly and well.
Makers of multipurpose devices are mostly unwilling to spend tens (and sometimes hundreds) of dollars per unit for each feature to be as good as those delivered by a high-quality single-purpose device; especially when it would add weight or heat, or eat up battery power.
Dustin Adams, a Ph.D. student at the University of California, Santa Cruz, has teamed up with colleagues at his school to craft an app that helps visually impaired users line up the ideal snapshot.
The researchers also built their own app that dispenses with a “shutter” button as it can be hard for people with a visual impairment to locate. Instead, the app snaps a picture in response to a simple upward swipe gesture. And it merges face detection and the voice accessibility features so that the phone speaks out loud the number of faces detected, helping the user get everyone in shot. Audio cues help get the main subject of a shot in frame and in focus.
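The researchers’ code is not public, but the “speak the number of faces” step can be sketched simply: take the bounding boxes any face detector returns and turn them into a spoken prompt. The panning heuristic below (steer toward the emptier side of the frame when faces are missing) is my own hypothetical addition, not the app’s documented behaviour:

```python
def framing_cue(faces, frame_width, expected=None):
    """Build a spoken prompt from detected face bounding boxes.

    faces: list of (x, y, w, h) boxes from any face detector.
    frame_width: width of the camera frame in pixels.
    If fewer than 'expected' faces are found, suggest panning toward
    the emptier side of the frame (a simple illustrative heuristic).
    """
    n = len(faces)
    if n == 0:
        return "No faces detected"
    cue = f"{n} face detected" if n == 1 else f"{n} faces detected"
    if expected is not None and n < expected:
        centre = sum(x + w / 2 for x, y, w, h in faces) / n
        direction = "left" if centre > frame_width / 2 else "right"
        cue += f", try panning {direction}"
    return cue

print(framing_cue([(40, 50, 80, 80), (300, 60, 80, 80)], 640))
# "2 faces detected"
print(framing_cue([(500, 60, 80, 80)], 640, expected=2))
# "1 face detected, try panning left"
```

The string would then be handed to the phone’s text-to-speech accessibility layer rather than shown on screen.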
As soon as the app’s camera mode is turned on, the phone also begins recording a 30-second audio file which can be restarted at any time with a double tap to the screen. This is to help with photo organising and sharing – and is used as an aide-memoire as to who is in shot. The user can choose to save this sound file along with the time and date, and GPS data that is translated into audio giving the name of the neighbourhood, district or city the shot was taken in.
NTT DOCOMO’s concept video shows examples of current and soon-to-be-enhanced services designed to improve users’ daily lives.
Contactless and touch-free technology has been coming to a store near you for a number of years now, but 2013 may be the first year it becomes truly integrated.
There are innumerable ways to use touch-free technology, and the benefits are significant. Here are some of the ways this contactless tech is being integrated into our lives.
One of the big changes we’ve seen in the last 12 months or so is the introduction of contactless payments. In the UK, for instance, bus fares once paid via prepaid travel cards can now be paid with bank cards, as can payments in shops and for services. Payments are generally limited to relatively small sums.
In many cases these payments are also being mobilised, allowing users with NFC-chipped mobile phones to pay via an online wallet from their phone. With smartphone saturation reaching 75% in the UK, making payments with your mobile may become increasingly popular. Payments are usually made against modified chip-and-PIN devices in shops, bus travel card readers, or kiosks and other similar machines.
Other examples of contactless payment include sticker strips that can be added to the back of phones and linked to bank accounts. These strips use the same NFC technology, but work with phones that lack the built-in chip.
PayPal has also created its own alternative to the technology, letting users pay from devices without NFC. PayPal users pay with their handsets by entering their phone number and a PIN into the point-of-sale device. The option is becoming increasingly available in the US, though it is closer to cardless payment than contactless.
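The “relatively small sums” constraint is enforced by a floor limit at the terminal: taps above it are declined and fall back to chip and PIN. A toy sketch of that check (the £20 limit is an assumption based on the UK figure at the time; actual limits vary by market and card scheme):

```python
CONTACTLESS_LIMIT = 20.00   # assumed floor limit in GBP; varies by scheme

def tap_to_pay(amount, balance):
    """Authorise a contactless tap if it is under the floor limit.

    Returns (approved, new_balance). Amounts above the limit are
    declined; a real terminal would prompt for chip and PIN instead.
    """
    if amount > CONTACTLESS_LIMIT:
        return False, balance       # fall back to chip and PIN
    if amount > balance:
        return False, balance       # insufficient funds
    return True, balance - amount

print(tap_to_pay(4.50, 100.0))     # (True, 95.5)
print(tap_to_pay(35.00, 100.0))    # (False, 100.0)
```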
Gesture control is still in its infancy in consumer tech; however, expect to see it everywhere soon. The technology uses sensors and cameras to let users control their screens.
Users issue commands through a variety of pre-defined movements, and the device reacts accordingly. Samsung has integrated this technology into a number of its higher-end TVs, replacing most of the remote control’s functions. Users can switch channels, change volume and control the TV simply by gesturing toward it with their hands in a specific manner. You can also see this technology on tablets, phones and other high-tech ProTouch kiosks with suitable displays.
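Once a recogniser has classified a movement, the rest is a simple dispatch from gesture names to device actions. A minimal sketch with a hypothetical gesture vocabulary (the actual gestures Samsung’s TVs recognise differ):

```python
# Hypothetical mapping from recognised gestures to TV actions
GESTURE_ACTIONS = {
    "swipe_left": "previous_channel",
    "swipe_right": "next_channel",
    "palm_up": "volume_up",
    "palm_down": "volume_down",
    "fist": "pause",
}

def handle_gesture(gesture, tv_state):
    """Apply a pre-defined gesture to a simple TV state dict."""
    action = GESTURE_ACTIONS.get(gesture)
    if action == "next_channel":
        tv_state["channel"] += 1
    elif action == "previous_channel":
        tv_state["channel"] -= 1
    elif action == "volume_up":
        tv_state["volume"] = min(100, tv_state["volume"] + 5)
    elif action == "volume_down":
        tv_state["volume"] = max(0, tv_state["volume"] - 5)
    elif action == "pause":
        tv_state["paused"] = not tv_state["paused"]
    return tv_state            # unknown gestures leave state unchanged

state = {"channel": 3, "volume": 40, "paused": False}
for g in ["swipe_right", "palm_up", "fist"]:
    state = handle_gesture(g, state)
print(state)   # {'channel': 4, 'volume': 45, 'paused': True}
```

The hard part of gesture control is the recogniser upstream of this table, which is why the technology is still maturing.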
Hands-free and contactless technology is the next little revolution we’ll find ourselves taking part in. As with most technology, you’ll be using it every day without realising it, as it invisibly integrates into your life.
John Self writes articles on technology and enjoys keeping up with the latest tech happenings.
To date, many engineers, designers and user-interface experts have worked hard to make reading on an e-reader or tablet as close to reading on paper as possible. E-ink resembles chemical ink and the simple layout of the Kindle’s screen looks like a page in a paperback. Likewise, Apple’s iBooks attempts to simulate the overall aesthetic of paper books, including somewhat realistic page-turning. Scientific American reports on how research suggests that devices can prevent readers from wholly absorbing longer texts.
How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us, but to just about everyone who reads — to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading — but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin?
An emerging collection of studies emphasizes that in addition to screens possibly taxing people’s attention more than paper, people do not always bring as much mental effort to screens in the first place. Subconsciously, many people may think of reading on a computer or tablet as a less serious affair than reading on paper. Based on a detailed 2005 survey of 113 people in northern California, Ziming Liu of San Jose State University concluded that people reading on screens take a lot of shortcuts — they spend more time browsing, scanning and hunting for keywords compared with people reading on paper, and are more likely to read a document once, and only once.