Here’s Google Artificial Imagination applied to a popular movie scene, as if “Fear & Loathing in Las Vegas” isn’t trippy enough.
Just a quick experiment visualizing the HeatStroke tune:
I rendered the hi-hat, snare, bass drum and sequencer tracks to separate audio files, modeled a drum kit in 3ds Max, and applied a FumeFX modifier to each part. Each audio track then drives the amount of fuel its component receives. The simulation runs at relatively low detail due to time constraints, but the idea seems to work.
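The core trick is mapping each track's loudness to a per-frame fuel value. A minimal Python sketch of that mapping (the original was done inside 3ds Max/FumeFX; the synthetic drum-hit signal and function names here are purely illustrative):

```python
import math

def frame_envelope(samples, sample_rate, fps=30):
    """Collapse an audio track into one amplitude (RMS) value per video frame."""
    per_frame = int(sample_rate / fps)
    env = []
    for i in range(0, len(samples) - per_frame + 1, per_frame):
        chunk = samples[i:i + per_frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        env.append(rms)
    peak = max(env) or 1.0
    # Normalize to 0..1 so it can be keyed directly onto a fuel parameter.
    return [v / peak for v in env]

# Synthetic "drum hit": a decaying sine burst, standing in for a rendered track.
sr = 44100
samples = [math.exp(-8 * t / sr) * math.sin(2 * math.pi * 180 * t / sr)
           for t in range(sr)]
fuel = frame_envelope(samples, sr)  # one fuel value per frame of animation
```

Each entry of `fuel` would then be keyframed onto the corresponding FumeFX object's fuel amount, so the smoke "fires" in time with the track.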
Google has modified its image-recognition neural net with a feedback loop, and it now “dreams up” images of its own, even from nothing…
But if the neural network is tasked with finding a more complex feature – such as animals – in an image, it ends up generating a much more disturbing hallucination:
Ultimately, the software can even run on an image which is nothing more than random noise, generating features that are entirely of its own imagination.
“One way to visualise what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation,” they add. “Say you want to know what sort of image would result in ‘banana’. Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana.”
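The update rule they describe is gradient ascent on the input image: repeatedly nudge the pixels in whatever direction raises the target class score. A toy sketch of just that rule, with a fixed linear template standing in for the network's “banana” score (the real system backpropagates through a trained convolutional net, which is far beyond this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the network's class score: correlation with a fixed template.
# (DeepDream computes this gradient by backprop through the whole net.)
template = rng.normal(size=(8, 8))

def score(img):
    return float(np.sum(img * template))

img = rng.normal(scale=0.1, size=(8, 8))   # start from random noise
before = score(img)
for _ in range(100):
    grad = template                        # d(score)/d(img) for a linear score
    # Normalized gradient-ascent step: "tweak the image towards a banana".
    img += 0.1 * grad / (np.abs(grad).mean() + 1e-8)
after = score(img)
```

After the loop, `after` is far larger than `before`: the noise image has been pulled toward whatever the scorer considers maximally banana-like.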
The image recognition software has already made it into consumer products. Google’s new photo service, Google Photos, features the option to search images with text: entering “dog”, for instance, will pull out every image Google can find which has a dog in it (and occasionally images with other quadrupedal mammals, as well).
So there you have it: Androids don’t just dream of electric sheep; they also dream of mesmerising, multicoloured landscapes.
Intel unveils button-sized Curie module to power future wearables
Intel has today unveiled Curie, a low-powered module no bigger than a button, as part of its vision to lead in the wearables field. Company CEO Brian Krzanich announced the module, which will be built on a tiny new chip called the Quark SE, during his keynote at CES in Las Vegas — a year on from announcing the Intel Edison platform.
The module incorporates the low-power 32-bit Quark microcontroller, 384kB of flash memory, motion sensors, Bluetooth LE and battery-charging capabilities in order to power the very smallest of devices. Intel is hoping Curie will prove the flexible solution designers need to create wearables such as rings, pendants, bracelets, bags, fitness trackers and even buttons. It has been created with always-on applications in mind, so will be suitable for devices that relay notifications or constantly track a wearer’s activity.
Intel started down this road with its stamp-sized 22nm Edison SoC, and the Curie module shrinks things down even further. The module uses Bluetooth LE and has a built-in accelerometer and gyroscope to track movements and recognize gestures. It can run off either a rechargeable battery or a more traditional coin-cell watch battery, though Intel doesn’t say for exactly how long. Curie essentially turns just about anything into a gadget at least as smart as your average fitness tracker: rings, buttons, glasses, watches, what have you.
Microsoft, the multinational corporation that powers much of the world’s computing infrastructure with its Windows OS, now accepts Bitcoin online for its digital products. Microsoft users, in the US only for now, can fund their Microsoft/Windows Live or Xbox Live accounts using Bitcoin at the current exchange rate. Once Microsoft’s Bitcoin rollout is complete, businesses and end users alike will be able to pay for services such as Azure or even Windows Phone apps.
Once Microsoft starts accepting Bitcoin worldwide, the frictionless transactions will truly begin to flow. With $86.63 billion in annual revenue and $172.38 billion in total assets, Microsoft is now the largest company in the world to accept Bitcoin. Microsoft founder Bill Gates has previously said that “Bitcoin Technology is key.”
Microsoft has started to accept Bitcoin payments, allowing customers to buy apps, games and videos from online stores with the crypto-currency.
The new feature, which was quietly added to its website last night, has been enabled by a partnership with payment processing startup BitPay.
Customers can now use Bitcoin to add money to their Microsoft account, but not use it to pay for goods directly. Once you add money to your Microsoft account you can use it to buy apps, games and other digital content from the Windows, Windows Phone, Xbox Games, Xbox Music and Xbox Video stores.
“Most Bitcoin transactions should process immediately. If it doesn’t, please wait up to two hours for the transaction to complete before contacting support,” says a support page on Bitcoin on Microsoft’s website. “Money added to your Microsoft account using Bitcoin cannot be refunded, so make sure to review your transaction before paying with your digital wallet.”
Currently the feature is limited to the US; customers here in the UK cannot yet add funds to their account with a Bitcoin payment.
The Intel Edison module uses a 22 nm Intel® Atom™ SoC (formerly codenamed Silvermont) that includes a dual-core, dual-threaded CPU at 500 MHz and a 32-bit Intel® Quark™ microcontroller at 100 MHz. It supports 40 GPIOs and includes 1 GB LPDDR3, 4 GB eMMC, and dual-band Wi-Fi and Bluetooth® Low Energy on a module the size of a postage stamp.
The Intel Edison module will initially support development with Arduino* and C/C++, followed by Node.js, Python, RTOS, and visual programming support in the near future.
The Intel Edison module includes a device-to-device and device-to-cloud connectivity framework to enable cross-device communication and a cloud-based, multi-tenant, time-series analytics service.
“While we’re focused on giving people more seamless buying experiences, we’re also fierce advocates of giving merchants — and in turn their customers — flexibility and the freedom of choice. That’s why today, we also announced that we’ll enable our customers to easily accept bitcoin in the coming months via a partnership with Coinbase — a trusted, high quality bitcoin payment processor with 1.6M consumer wallets and 36,000 merchants globally. As we make bitcoin available, our v.zero SDK will make it seamless for developers and merchants to add bitcoin to their existing payment methods and provide an elegant, adaptive user interface for consumers to pay in bitcoin with their Coinbase wallet (request access to the upcoming beta).
At Braintree, we’ve been an open platform from the beginning, striving to give merchants easy access to the most sophisticated payment tools, which now includes the most relevant wallets across platforms — all via a single integration, the Braintree v.zero SDK. Today, this includes PayPal, with over 150M active digital wallets, Venmo, which is processing more than $1.5B in mobile payment volume annually, Coinbase in the coming months, and whatever is relevant for developers and merchants in the future.”
Advanced neuro-technologies including wireless EEG and robotized TMS enable first successful transmission
BOSTON – In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans. Recently published in PLOS ONE, the highly novel findings describe the successful transmission of information via the internet between the intact scalps of two human subjects located 5,000 miles apart.
In the neuroscientific equivalent of instant messaging, Pascual-Leone, together with Giulio Ruffini and Carles Grau leading a team of researchers from Starlab Barcelona, Spain, and Michel Berg, leading a team from Axilum Robotics, Strasbourg, France, successfully transmitted the words “hola” and “ciao” in a computer-mediated brain-to-brain transmission from a location in India to a location in France using internet-linked electroencephalogram (EEG) and robot-assisted and image-guided transcranial magnetic stimulation (TMS) technologies.
Using EEG, the research team first translated the greetings “hola” and “ciao” into binary code and then emailed the results from India to France. There a computer-brain interface transmitted the message to the receiver’s brain through noninvasive brain stimulation. The subjects experienced this as phosphenes, flashes of light in their peripheral vision. The light appeared in numerical sequences that enabled the receiver to decode the information in the message, and while the subjects did not report feeling anything, they did correctly receive the greetings.
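The encoding step is ordinary binary coding: the word becomes a bit string, each bit becomes a stimulation event the receiver perceives (or doesn't) as a flash. A small sketch of the round trip, using 7-bit ASCII per character for illustration (the study used its own coding scheme; this only shows the principle):

```python
def to_bits(word):
    """Encode a word as a bit string, 7-bit ASCII per character."""
    return ''.join(format(ord(c), '07b') for c in word)

def from_bits(bits):
    """Decode a bit string back into text, 7 bits per character."""
    chars = [bits[i:i + 7] for i in range(0, len(bits), 7)]
    return ''.join(chr(int(b, 2)) for b in chars)

# Sender side: word -> bits. Each '1' would trigger a TMS pulse,
# which the receiver perceives as a phosphene (flash of light).
bits = to_bits("hola")
# Receiver side: record the flash / no-flash sequence and decode it.
decoded = from_bits(bits)
```

The receiver never "hears" anything; they simply note the sequence of flashes and run the decoding step by hand.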
A second, similar experiment was conducted between individuals in Spain and France, resulting in a total error rate of just 15 percent: 11 percent on the decoding end and five percent on the initial coding side.
“By using advanced precision neuro-technologies including wireless EEG and robotized TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write,” says Pascual-Leone. “This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications. We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication.”
Some really cool technology in the making: take a helmet cam, go running, climbing, whatever action you’d like to film, and this software will reconstruct a smooth time lapse based on some clever algorithms!
“We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyperlapse videos: time-lapse videos with a smoothly moving camera.
At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.
Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video (shown in red) that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input.
Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path.
Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame. We present a number of results for challenging videos that cannot be processed using traditional techniques.
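The path-optimization step above boils down to finding a new camera trajectory that is smooth yet stays close to the shaky input. A toy 1-D sketch of that trade-off (the paper's actual solver works on full 6-DoF camera poses with rendering constraints; this is just Laplacian smoothing with a data-attachment term, and all parameter values are illustrative):

```python
import numpy as np

def smooth_path(positions, smoothness=10.0, iters=500, step=0.02):
    """Relax a path so it is smooth (each point near its neighbours' average)
    while a data term keeps it close to the input camera positions."""
    p = positions.copy()
    for _ in range(iters):
        grad = p - positions                       # pull toward the input path
        curv = np.zeros_like(p)
        curv[1:-1] = 2 * p[1:-1] - p[:-2] - p[2:]  # discrete curvature
        grad[1:-1] += smoothness * curv[1:-1]      # pull toward neighbour average
        p -= step * grad
    return p

# Shaky 1-D input path: a straight line plus jitter, standing in for the
# erratic camera positions recovered from a helmet-cam video.
rng = np.random.default_rng(1)
raw = np.linspace(0, 10, 50) + rng.normal(scale=0.5, size=50)
out = smooth_path(raw)
```

Raising `smoothness` flattens the output path further at the cost of drifting away from the input cameras, which is exactly the tension the paper's optimizer balances against renderability.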
And this is HOW it works:
A Russian crime ring has amassed the largest known collection of stolen Internet credentials, including 1.2 billion user name and password combinations and more than 500 million email addresses, security researchers say.
The records, discovered by Hold Security, a firm in Milwaukee, include confidential material gathered from 420,000 websites, ranging from household names to small Internet sites. Hold Security has a history of uncovering significant hacks, including the theft last year of tens of millions of records from Adobe Systems.
Hold Security would not name the victims, citing nondisclosure agreements and a reluctance to name companies whose sites remained vulnerable. At the request of The New York Times, a security expert not affiliated with Hold Security analyzed the database of stolen credentials and confirmed it was authentic. Another computer crime expert who had reviewed the data, but was not allowed to discuss it publicly, said some big companies were aware that their records were among the stolen information.