Browsing Category: Robots


This Toy-Stealing Jerk Robot Will Teach Other Bots How to Hold Things

Photo: Carnegie Mellon University

When picking an object up, it takes humans a mere instant to know if they’ve grasped it properly, or if they need to adjust their grip so it’s more secure. Teaching robots how to properly pick something up is a monumental task that might actually get a little easier, by making it harder to do.

The easiest way to teach a robot how to pick something up is to simply let it learn by picking up object after object by itself, trying different techniques and approaches each time to successfully move it from one location to another. After thousands of hours of this repetitive task, the software powering a robot can eventually learn how to reliably pick something up, but that doesn’t necessarily mean the grip it’s using is solid. Teaching robots to pick things up securely is important because it will help reduce the risk of something getting dropped, which could be both expensive and dangerous in a factory setting.


But how does a robot know when it has a secure grip on something? To help teach our future overlords the proper way to grasp objects, researchers at Carnegie Mellon University are using a novel approach: as one robot attempts to pick something up, a second robot is working to snatch the same object away.



While one robot is tasked with picking up an object, its evil twin is programmed to try to snatch it away. If the second robot can do so easily, more often than not it means the first robot’s grasp wasn’t properly secure or stable. Through repetition, both robots get better and better at the jobs they’ve been programmed to do, and ultimately the approach will help teach robots the difference between a stable and an unstable grasp on an object.
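In learning terms, the snatcher turns plain pick-up success into a stricter label. A toy simulation (all probabilities and names here are invented; the real CMU system learns from camera images and motor feedback) shows why the adversary’s snatch attempt separates firm grips from lucky ones:

```python
import random

random.seed(0)

def attempt_grasp(grip_quality):
    """The grasping robot succeeds in proportion to its grip quality."""
    return random.random() < grip_quality

def attempt_snatch(grip_quality):
    """The adversary: weak grips are easy to snatch away."""
    return random.random() < (1.0 - grip_quality)

def label_grasp(grip_quality):
    """A grasp only counts as stable if it survives the snatch attempt."""
    if not attempt_grasp(grip_quality):
        return "failed"
    return "unstable" if attempt_snatch(grip_quality) else "stable"

# Over many trials, firmer grips earn the "stable" label far more often,
# giving a learner a stricter training signal than pick-up success alone.
trials = 10_000
stable_weak = sum(label_grasp(0.3) == "stable" for _ in range(trials))
stable_firm = sum(label_grasp(0.9) == "stable" for _ in range(trials))
```

A grasp that merely succeeds gets no credit here unless it also resists the snatch, which is exactly the distinction the researchers want the robots to learn.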

Photo: Carnegie Mellon University

The research, presented at the 2017 International Conference on Robotics and Automation (ICRA) last week, also demonstrates how single robots can be programmed to make the grasping challenge even harder. Once an item has been picked up, the robot can shake it vigorously to test whether its grip was secure. Using these “adversarial” techniques, as they’re called, can not only improve a robot’s ability to grasp items but also accelerate the entire learning process, which saves time and money. Most importantly, though, this research ensures the robots of the future will end up with a solid, confident handshake.


[YouTube via IEEE Spectrum]


The Pentagon’s Silicon Valley Outpost Is Bringing “Robotic Wingmen” to the Battlefield

Photo: Getty

DIUx is a Department of Defense initiative set up in Silicon Valley to incubate special projects, and it’s starting to roll out some fully formed concepts. The latest prototype the program has produced would allow Maverick to fly with a robotic Goose, and it’s totally okay if this wingman dies.

The Defense Innovation Unit Experimental (DIUx) was founded by former Secretary of Defense Ash Carter as a sort of liaison between the DoD and innovators who wouldn’t otherwise want to work in the military environment. The red-tape-laden process of working within the government is the opposite of the move-fast-and-break-things mindset that Silicon Valley adores. So companies like Osterhout Design Group have popped up to work as go-betweens, and DIUx functions as a sort of semi-hands-off VC.


All in all, the program doesn’t seem like it’s been going so well. Carter has announced the beginning of DIUx 2.0, which isn’t usually a reassuring phrase, and Elon Musk recently had some meetings with the Pentagon to start something secret. The branch needs a win, and the robotic wingmen are being pushed out in a show of results.

The program itself may not be going smoothly, but according to Defense News, it’s been good for the companies that DIUx invests in: $1.5 billion in private cash has flowed to companies that first received the Pentagon’s money. Kratos Defense and Security Solutions is handling the drone wingman project. According to the Washington Post:

On Tuesday, Kratos Defense and Security Solutions officially announced two new classes of drones designed to function as robotic wingmen for fighter pilots. Development of the UTAP-22 Mako has been funded by the Defense Department’s Silicon Valley laboratory, dubbed DIUx. Separately, the company showed off a larger, 30-foot-long drone backed by the Air Force called the XQ-222 Valkyrie, with a range of more than 4,000 nautical miles.

Aviation experts say the speed and altitude capacities published by Kratos suggest the drones could fly in tandem with an F-16 or F-35 fighter. The company says it has already successfully flown the drones alongside manned aircraft and that it will soon embark on an advanced round of testing above California’s Mojave Desert employing a more sophisticated array of sensing technology to determine just how autonomous the drones can be.

The testing is set for July, and a “demonstrated military exercise” is scheduled for sometime in the second half of 2017. That’s pretty darn fast for a military project. Usually, when we hear about these experimental designs, they’ve been in development for many years, haven’t changed much, and have years of development ahead of them.


According to Kratos’ president, Steven Fenley, “These systems can conduct fully autonomous missions.” The idea is for them to fly alongside a manned aircraft and be able to independently perform maneuvers. For now, the sensors are directing the drone to mimic the piloted aircraft’s movements, but in the future, the drones may be flying ahead or independently drawing enemy fire.

Kratos’ contract with DIUx is for only $12.6 million. We don’t know how much outside funding has also come in, but if the project is a success, it would certainly demonstrate that the program’s approach of small private-sector-style investment is working. I wouldn’t hold my breath.


[Washington Post]


How Have I Lived My Whole Life Without an Extra Pair of Robot Arms?

GIF: YouTube

If you’re jealous of Tony Stark’s Iron Man suit, but don’t have billions of dollars to build your own, a group of Japanese researchers has come up with a cheaper, and arguably more useful, alternative: an extra pair of robot arms that can help out when your own limbs are busy.

Developed at the Inami Hiyama Laboratory at the University of Tokyo, along with researchers from Keio University, the MetaLimbs will officially be unveiled at the upcoming Siggraph 2017 conference, although this video gives us our first look at how the arms work.



Instead of using mind control, or attempting to give the arms some level of autonomy and intelligence to know what the wearer wants them to do, the extra limbs mimic the movements of the user’s legs. Motion-tracking gear attached to the wearer’s feet and knees directly translates their leg motions to the arms, giving the user precise control over their new helper limbs.
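The researchers’ actual control mapping isn’t spelled out here, but the idea of translating tracked leg motion into arm commands can be sketched as follows (the scaling factor, rest-pose convention, and knee-angle threshold are all invented for illustration, not taken from the MetaLimbs paper):

```python
def leg_to_arm_command(foot_pos, knee_angle, scale=0.5):
    """Map one tracked leg to a command for the corresponding robot arm.

    foot_pos is the foot's (x, y, z) offset from a calibrated rest pose; it
    sets where the robot hand should go, scaled down to the arm's workspace.
    The knee angle drives the gripper: fold the knee past 90 degrees to close.
    """
    x, y, z = foot_pos
    hand_target = (x * scale, y * scale, z * scale)
    gripper_closed = knee_angle > 90.0
    return hand_target, gripper_closed

# Kick the foot forward and bend the knee: the hand moves, the gripper closes.
target, closed = leg_to_arm_command((0.2, -0.1, 0.4), knee_angle=120.0)
```

A direct mapping like this is why the arms feel precise to control, and also why they’re unusable while walking: the legs can’t do both jobs at once.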

The approach does introduce some complications when it comes to using the arms while walking: you simply can’t. And you can forget using them to rise to the top of the UFC fighting leagues.


This is far from the first attempt to create an artificial human limb, but previous efforts have focused on replacing arms or legs lost in combat or for medical reasons. In comparison, the MetaLimbs are somewhat of a superfluous creation, worn mostly for convenience. But the research will almost surely benefit those developing replacement limbs as well, given the novel approach to how these arms can be precisely controlled.

There’s no word on how far these researchers plan to take their brilliant creation, but the next time you’re forced to work through your lunch hour, just imagine how wonderful an extra pair of arms would be, letting you enjoy a sandwich and drink without taking your real hands off your keyboard.

[YouTube via Prosthetic Knowledge]


This is What an ISIS Drone Workshop Looks Like

A drone and a mobile IED at an Islamic State factory discovered by Iraqi forces on June 23, 2017, in the frontline neighborhood of Al-Shifa, on the edge of the Islamic State-occupied Old City of west Mosul (Photo by Martyn Aim/Getty Images)

The Islamic State has increasingly used drones and other robotic IEDs against American, Iraqi, and civilian targets in Iraq. And as the Coalition fights its way through Mosul, troops are discovering workshops filled with crude but deadly robotics used to bomb people, sometimes dozens of times per day.

Getty Images just published photos of an ISIS factory that’s churning out robotic death machines, including aerial drones and four-wheeled robotic bombs. The photos give us a look at the new ways in which ISIS is building robots to spread death and destruction.

While ISIS has sometimes been retrofitting hobby drones with explosives, they’ve also been building drones from scratch using metal pipes, wooden propellers, and repurposed small engines.

(Photo by Martyn Aim/Getty Images)

But it’s not just aerial unmanned systems being built by the ISIS forces. Rolling improvised explosive devices have also been discovered in ISIS workshops recently.

(Photo by Martyn Aim/Getty Images)

The primitive robots, like the one seen in the upper left-hand corner of the photo below, are similar to the first robotic bombs used in World War II by both American and Nazi forces.

(Photo by Martyn Aim/Getty Images)

As you can see in the photo below from 1942, robotic warfare really is nothing new.

The German Goliath, pictured above, was about 5 feet long and 1.5 feet tall and carried 132 pounds of explosives. It had a cable that was almost a mile long and would advance on Allied troops.


The ISIS engineers appear to be repurposing any engines they can get their hands on, including some from motorcycles found in the latest descent on ISIS strongholds in Mosul.

(Photo by Martyn Aim/Getty Images)

The offensive against ISIS fighters in the battle for Mosul has reportedly killed at least one drone maker who was working for ISIS, though that could not be independently confirmed by Gadgetlayout.

“An IS secret rest house, used for launching drones, at the outskirts of Tal Afar, west of Mosul, was heavily shelled early on Tuesday, leaving the member in charge of the drones, called Abu Hafsa, and some companions, killed,” an anonymous source told AlSumaria News.

(Photo by Martyn Aim/Getty Images)

But American and Iraqi forces continue to push through Mosul and have captured the Great Mosque of al-Nuri, where ISIS first declared its so-called caliphate. While it’s a tremendous symbolic victory, there’s still a lot of work to do before the ISIS drone and robot makers are put out of work for good.


Roomba CEO Swears That He Will Never Sell Maps of Users’ Homes, So Help Him God

Photo: AP

iRobot, the maker of Roomba, made big news this week when an interview with its CEO mentioned plans to sell the map data of customers’ homes to third parties. Today, the company launched damage-control measures, and the CEO is spreading assurances that this is all just a big misunderstanding.

In a statement first shared with ZDNet, iRobot CEO Colin Angle wrote:

First things first, iRobot will never sell your data. Our mission is to help you keep a cleaner home and, in time, to help the smart home and the devices in it work better.

Pledging never to sell customers’ data is great. There are tons of issues that could arise from selling it, and, let’s face it, imagining faceless corporations knowing all the details of your inner sanctum is a really uncomfortable thought. But taking someone’s word for it is never a good idea, and Angle’s statement raised more questions for us.


We reached out to a spokesperson for iRobot, who tells Gadgetlayout that Reuters’ original article about iRobot contained “an unintentional misinterpretation of Colin’s statements.” In fact, Reuters issued a correction today. The paragraph that set off a firestorm has now replaced the words “sell maps” with “share maps for free with customer consent.” It reads in full:

Angle told Reuters that iRobot, which made Roomba compatible with Amazon’s Alexa voice assistant in March, could reach a deal to share its maps for free with customer consent to one or more of the Big Three in the next couple of years. Angle added the company could extract value from those agreements by connecting for free with as many companies as possible to make the device more useful in the home.

So we know that Reuters admits to the misunderstanding, but iRobot is still saying that it’s considering sharing all that map data, just not selling it for cash. And a great way to guarantee that “iRobot will never sell your data” would be to include those exact words in Roomba’s privacy policy. But iRobot wouldn’t commit to that. “There will be language in our privacy policy to address this concern,” a company spokesperson told us.


We asked if iRobot currently shares all of the map data with the Amazon Echo when the two are connected. Here’s what the company told us:

iRobot is not sharing mapping data with any third parties, including Amazon. Amazon does receive partial data from iRobot if a customer chooses to link their Roomba to Alexa, which is limited to the commands required to control the robot via voice control, such as starting a cleaning job, stopping a cleaning job, etc.

Regarding whether or not iRobot would make it a permanent policy to never share its full mapping data with smart devices, the spokesperson told us, “We cannot commit on policy details of hypothetical future use cases or features.”


Unfortunately, hypothetical future use cases are exactly what we’re talking about. We’ve attempted to get more information about exactly what data is being stored by iRobot but company reps have avoided specificity.

The company did tell us that the Roomba’s onboard camera is “physically separated from any wireless or wired transmission,” and that “the only data that is sent from the robot to the network (with customer consent) is information about cleaning jobs and lifetime cleaning statistics.” Of course, just about anything a camera-equipped vacuum records while patrolling your home for dust bunnies could be considered information “about cleaning jobs.”

The company would not share a complete list of data points it collects, but it did inform us that the map you see on your phone app is not the map that they see. “The map that the Roomba creates during a cleaning job is sent to the cloud where it is processed and simplified to produce a user-friendly map that ultimately appears in the iRobot HOME App,” the representative told us.


The terms of service that users agree to are, thus far, unchanged. One troubling section says, among other things, that iRobot may share your personal information with “other parties in connection with any company transaction” and “sale of all or a portion of company assets or shares.” We asked if this section would be amended and were told twice that “this language is in the event a company ever purchased iRobot.” That’s certainly true for many parts of that section, but the two quoted clauses appear to leave open the option to sell off company assets (like valuable data) in “any company transaction” (like maybe a transaction in which it sells your data). And, oh yeah, some unknown company could buy iRobot.

What we’ve learned is that one guy said something about never selling your data, but mostly it’s on you to decide how best to protect yourself. Some kind of change is coming to the terms of service, maybe. And iRobot doesn’t want to tell you what data it has. But headlines will blare that the company has reversed its position. Customer outrage may have caused headaches at the company this week, but it seems investors have noticed iRobot’s new potential. Stock prices started the week at $89.49 and at the moment are sitting comfortably in the $107 range. Big data is big money, baby.


[ZDNet]


Would You Feel Safer If Your Self-Driving Car Could Explain Itself?

Image: Hot Tub Time Machine 2

With each passing breakthrough in artificial intelligence, we’re asking our machines to make increasingly complex and weighty decisions. Trouble is, AIs are starting to act beyond our levels of comprehension. In high-frequency stock trading, for example, this has led to so-called flash crashes, in which algorithms make lightning-quick decisions for reasons we can’t quite grasp. In an effort to bridge the growing gap between man and machine, the Pentagon is launching a new program to create machines that can explain their actions in a way we puny humans can understand.

The Defense Advanced Research Projects Agency (DARPA) is giving $6.5 million to eight computer science professors at Oregon State University’s College of Engineering. The Pentagon’s advanced concepts research wing is hoping these experts can devise a new system or platform that keeps humans within the conceptual loop of AI decision-making, allowing us to weigh in on those decisions as they’re being made. The idea is to make intelligence-based systems, such as self-driving vehicles and autonomous aerial drones, more trustworthy. Importantly, the same technology could also result in safer AI.

Part of the problem of humans not understanding AI decision-making stems from how AI works today. Instead of being programmed for specific behaviors, many of today’s smartest robots operate by learning on their own from many examples, a process called machine learning. Unfortunately, this often leads to solutions that the system’s developers don’t even understand. Think of computers making chess moves that baffle even the game’s top grandmasters. At the same time, the system cannot provide any sort of feedback to explain itself.


Accordingly, we’re becoming increasingly wary of machines that have to make important decisions. In a recent study, most participants agreed that autonomous vehicles should be programmed to make difficult ethical decisions, such as killing the car’s occupant instead of ten pedestrians in the absence of any other options. Trouble is, the same respondents said they wouldn’t want to ride in such a car. It seems we want our intelligent machines to act as ethically and socially responsibly as possible, so long as we’re not the ones being harmed.

Perhaps it would help us to trust our machines more if we could peer under the hood and see how AIs reach their decisions. If we’re not happy with what we see, or with how an AI reached a decision, we could simply pull the plug, or choose not to purchase a certain car. Alternatively, programmers and computer scientists could provide the AI with new data, or different sets of rules, to help the machine come up with more palatable decisions.

Under the new four-year DARPA grant, researchers will work to develop a platform that facilitates communication between humans and AI to serve this very purpose.


“Ultimately, we want these explanations to be very natural-translating these deep network decisions into sentences and visualizations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s Collaborative Robotics and Intelligent Systems Institute.

During the first stage of this multi-disciplinary effort, researchers will use real-time strategy games, like StarCraft, to train AI “players” that will have to explain their decisions to humans. Later, the researchers will adapt these findings to robotics and autonomous aerial vehicles.

This research may become crucial not just for improving trust between humans and self-driving cars, but for any kind of autonomous machine, including those with even greater responsibilities. Eventually, artificially intelligent war machines may be required to kill enemy combatants. At that stage, we will most certainly need to know why machines are acting in a particular way. Looking even further ahead, we may one day need to peer into the mind of an AI vastly beyond human intelligence. This won’t be easy; such a machine will be able to calculate thousands of decisions in a split second. It may not be possible for us to understand everything our future AIs do, but by thinking about the problem now, we have a better shot at constraining future robots’ actions.


[Oregon State University]


This robot uses AI to write and play its own music

In a first, researchers have developed a robot that can write and play its own music compositions using artificial intelligence and deep learning.

The robot, named Shimon, has four arms and eight sticks and can play harmonies and chords on the marimba. It also thinks much more like a human musician, focusing less on the next note and more on the overall structure of the composition.

The researchers from Georgia Institute of Technology in the US fed the robot with nearly 5,000 complete songs — from Beethoven to the Beatles to Lady Gaga to Miles Davis — and more than two million motifs, riffs and licks of music.

Aside from giving the machine a seed, or the first four measures to use as a starting point, no humans were involved in either the composition or the performance of the music.

“Once Shimon learns the four measures we provide, it creates its own sequence of concepts and composes its own piece,” Mason Bretan, doctoral student at the Georgia Institute of Technology, said in a statement.

“Shimon’s compositions represent how music sounds and looks when a robot uses deep neural networks to learn everything it knows about music from millions of human-made segments,” he added.

As long as the researchers feed it a different seed, the robot produces something different each time — music that the researchers cannot predict.

In the first piece, Bretan fed Shimon a melody composed of eighth notes. The second time, it received a sixteenth-note melody, which influenced it to generate faster note sequences.

This leap in Shimon’s musical quality comes from its use of deep learning, which enables it to create more structured and coherent compositions, the researchers said.
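Shimon’s actual models are deep neural networks trained on millions of musical segments, but the seed-then-continue idea can be shown with something far simpler: a first-order Markov chain over notes (the toy corpus and note names below are invented for illustration):

```python
import random

random.seed(7)

# Toy corpus standing in for the millions of human-made musical segments.
corpus = ["C", "E", "G", "E", "C", "E", "G", "B", "G", "E",
          "C", "D", "E", "F", "G"]

# Learn which note tends to follow each note in the corpus.
transitions = {}
for current, following in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(following)

def compose(seed, length=16):
    """Continue from the human-provided seed, one note at a time."""
    melody = list(seed)
    while len(melody) < length:
        options = transitions.get(melody[-1], corpus)
        melody.append(random.choice(options))
    return melody

# Different seeds steer the model toward different, unpredictable pieces.
piece = compose(["C", "E"])
```

As with Shimon, the seed fixes the opening, the learned statistics shape everything after it, and a different seed yields a different piece each time.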

Shimon’s debut as a solo composer was featured in a 30-minute video clip in the Consumer Electronics Show (CES) keynote, and the robot will have its first live performance at the Aspen Ideas Festival at the end of June, the researchers said.


Robot Journalist Accidentally Reports on Earthquake from 1925

File photo of damage from an earthquake in Napa in 2014 (Photo by Justin Sullivan/Getty Images)

Yesterday, the Los Angeles Times reported on a 6.8 earthquake that struck Santa Barbara at 4:51pm. Which might be surprising to the people of Santa Barbara who didn’t feel anything. The big problem with the story? The earthquake happened in 1925.

How could reporters get something so wrong? Well, the “reporter” who wrote yesterday’s news article about the 6.8 quake was actually a robot. The L.A. Times deleted its automated tweet as well as the automatically published article and explained what happened in a subsequent tweet:

The newspaper’s algorithm, called Quakebot, scrapes data from the US Geological Survey’s website. A USGS staffer at Caltech mistakenly sent out the alert when updating historical earthquake data to make it more precise.


Seismologists have reportedly complained about some of the historical data being off by as much as 6 miles, and this staffer was simply updating the location of the old quake from 1925. But it shows how quickly misinformation can spread with just a few clicks.
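The Times hasn’t published Quakebot’s code, but the failure mode is easy to sketch: an alert pipeline that trusts a feed entry without checking the event’s origin time. Below is a hedged sketch (the function and variable names are mine, though the `properties.time` epoch-milliseconds field matches the public USGS GeoJSON feed format) of the kind of freshness guard that would have caught a 1925 quake:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=1)

def should_publish(feature, now=None):
    """Reject events whose origin time is far in the past: an edit to a
    historical record (like the 1925 Santa Barbara quake) isn't breaking news."""
    now = now or datetime.now(timezone.utc)
    # USGS GeoJSON feeds report origin time in milliseconds since the epoch.
    event_time = (datetime(1970, 1, 1, tzinfo=timezone.utc)
                  + timedelta(milliseconds=feature["properties"]["time"]))
    return now - event_time <= MAX_AGE

# Two feed entries, abbreviated to the fields the guard actually needs.
historical = {"properties": {
    "mag": 6.8, "place": "Santa Barbara, CA",
    "time": int(datetime(1925, 6, 29, tzinfo=timezone.utc).timestamp() * 1000)}}
recent = {"properties": {
    "mag": 3.1, "place": "a fresh test event",
    "time": int(datetime.now(timezone.utc).timestamp() * 1000)}}
```

A one-line age check like this is cheap insurance: a revised location for a 92-year-old quake would be silently ignored instead of tweeted to millions.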

An earthquake registering 6.8 is a big deal, so people were pretty relieved to see that it was a false alarm. The 1925 earthquake killed 13 people and caused over $8 million in damage. With so many more people living in the area today, it would no doubt be much more deadly.


The Los Angeles Times has employed Quakebot since 2014, and the bot has reported on hundreds of earthquakes, big and small, over the years. But this is its first known major screw-up since it was put online. And it certainly won’t be the last, as journalism on everything from homicides to baseball scores becomes more automated.

Quakebot could not be reached for comment by press time.

[Los Angeles Times]


This Floating Robotic Camera Is the Cutest Thing Ever Sent Into Space

Image: NASA/JAXA

Astronauts aboard the International Space Station have a new crew member: an adorable robotic ball capable of recording video while moving in zero gravity. Dubbed “Int-Ball,” the device will free astronauts to do more important work, while providing ground controllers with their own set of eyes.

Int-Ball is short for Internal Ball Camera, and it was developed by the Japan Aerospace Exploration Agency (JAXA). The unit was delivered to the ISS aboard a SpaceX Dragon spacecraft on June 4th, and it’s currently going through initial testing. The camera, which can move autonomously or be guided by controllers on the ground, is the first drone capable of recording still images and video while moving in space, according to JAXA.



The device, which measures nearly six inches (15 cm) in diameter, will allow mission controllers to closely monitor conditions inside the space station, freeing the crew to focus on more important tasks, such as conducting experiments and making repairs. According to JAXA, ISS astronauts currently spend around 10 percent of their working hours taking photos and video.

Int-Ball (which is an awful name for something so cute) is currently active in Japan’s “Kibo” experiment module aboard the ISS. Flight controllers and researchers at JAXA’s Tsukuba Space Center can monitor the images taken by the device in real time, and feed them back to the onboard crew.



Many of the components used to manufacture Int-Ball were produced by 3D printing, and its design was adapted from pre-existing drone technology. The device can move virtually anywhere inside the module, and record images from any angle. Controllers on the ground can thus use Int-Ball to see things from a crew member’s perspective, which could help when overseeing complicated work.


The floating bot is equipped with a three-axis control unit, which it uses to trigger the 12 fans located along its surface. This allows it to move and orient itself in zero gravity. A series of pink “3D Target Markers” have been attached to the module’s walls, allowing Int-Ball’s navigation camera to establish reference points and enable its autonomous mode.
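JAXA hasn’t published how Int-Ball maps a desired motion onto its 12 fans, but a common scheme for this kind of actuator is easy to sketch. Assuming, purely for illustration, that the fans sit in opposing pairs (two pairs per body axis), a simple thrust allocator might look like:

```python
# Hypothetical fan layout (the real Int-Ball geometry isn't public): the 12
# fans sit in opposing pairs, two pairs per body axis, so each axis has two
# fans blowing in the "+" direction and two blowing in the "-" direction.
FANS_PER_DIRECTION = 2

def allocate(desired_force):
    """Split a desired body-frame force (fx, fy, fz) across the 12 fans.
    Only the pair pointing the right way along each axis spins up, and the
    requested thrust is shared evenly between the two fans in that pair."""
    commands = []
    for component in desired_force:
        share = abs(component) / FANS_PER_DIRECTION
        positive = share if component > 0 else 0.0
        negative = share if component < 0 else 0.0
        # Command order per axis: [+fan A, +fan B, -fan A, -fan B]
        commands.extend([positive, positive, negative, negative])
    return commands

def achieved_force(commands):
    """Net force the commanded fans produce, as a sanity check."""
    force = []
    for axis in range(3):
        pa, pb, na, nb = commands[axis * 4: axis * 4 + 4]
        force.append(pa + pb - na - nb)
    return force

cmds = allocate((1.0, 0.0, -0.5))
```

The real unit also has to produce torques to point the camera, which is why the fans are distributed over the surface rather than clustered per axis, but the allocation idea, turning one desired motion into many small fan commands, is the same.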

This bot is a great idea, and surely a sign of things to come. Space is a dangerous and unhealthy place for humans, so the more work that robots can do, the better.


[JAXA via Engadget]


This Printer Doodles Stick Figure Robots to Explore Areas We Can’t


Building a robot that can replicate everything a human can do is both impossibly complicated and expensive. So researchers at the IT University of Copenhagen are taking the exact opposite approach: building incredibly simple robots, on demand, that only do what humans can’t.

In a paper recently published in the journal IEEE Robotics and Automation Letters, a team from the IT University of Copenhagen’s Robotics, Evolution and Art Lab (REAL for short), led by associate professor Sebastian Risi, has developed what it’s calling a “1D printer” that can create simple wire-based robots designed to accomplish a very specific task.



The printer works in a manner similar to industrial wire-bending machines, extruding a length of thin metal that is bent in multiple places to create shapes that can be used as limbs or tools. The printer also automatically attaches electric motors to the wire structure as it’s being extruded; these motors bring the robots to life once they’re complete.

Due to the simplicity of their structure and components, the robots are limited in what they can do. They won’t be bringing you a glass of iced tea on demand, but the scientists see them being useful for exploring areas inaccessible to humans. One example sees a simple wire robot crawling limb-over-limb down a pipe, but the bot could be outfitted with a wireless camera for relaying video of a hard-to-reach area, or a sensor array for providing details about whether or not it’s safe for humans to proceed.


What makes this 1D printing technology unique, however, is that the robots don’t need to be physically designed by a human. The team has also developed an algorithm that can autonomously design a robot for a given task when fed its specific requirements and constraints. For instance, a rescue worker might need a robot that can walk on loose rubble, but also squeeze through a hole just a few inches in size.

The algorithm can also learn from design mistakes when a robot it created has failed. It will simply design a new-and-improved model, again and again, until the requested task is accomplished. It sounds wasteful and time-consuming, but the robots take only about a quarter of an hour to produce, and they can be easily recycled by simply straightening the wire again.
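The paper’s exact search procedure isn’t described here, but the design-test-redesign loop can be sketched as a simple stochastic hill climb over the wire’s bend angles (the four-bend encoding and the stand-in fitness function below are mine, not the REAL lab’s):

```python
import random

random.seed(1)

def random_design():
    """A candidate robot: four bend angles along the extruded wire."""
    return [random.uniform(-90.0, 90.0) for _ in range(4)]

def mutate(design):
    """Nudge one bend angle: the 'new-and-improved model'."""
    redesigned = list(design)
    i = random.randrange(len(redesigned))
    redesigned[i] += random.uniform(-15.0, 15.0)
    return redesigned

def fitness(design, target=(45.0, -30.0, 60.0, -45.0)):
    """Stand-in for a physical trial: closeness to a bend pattern assumed
    to suit the task (e.g. fitting through a small hole). Higher is better."""
    return -sum(abs(angle - goal) for angle, goal in zip(design, target))

# Print a design, test it, recycle the wire, and try an improved one.
initial = random_design()
best = initial
for _ in range(2000):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):
        best = candidate
```

In the real system the fitness test is a physical trial of a printed robot rather than a formula, which is exactly why the fifteen-minute print time and recyclable wire matter: each loop iteration is cheap.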

The self-engineering aspect of these robots could make them useful when it comes to space exploration, and research on other worlds. When NASA designs a rover to explore a planet like Mars, it tries to engineer it to tackle countless types of terrain. But if these rovers had a 1D printer on board, they could produce additional probes that were able to further explore parts of Mars the rover wasn’t originally designed to handle.


[IEEE Xplore via New Scientist]