• 12m

    Ethereum Completes the "Berlin" Hard Fork! ETH Breaks Through $2,500, Then Pulls Back

    Blockcast「區塊客」
  • 12m

    Bitcoin Slumps Below Previous ATH: ETH Reached $2550 (Market Watch)

    CryptoPotato
  • 14m

    Five Crypto Platforms Will Increasingly Grab Market Share From Ethereum, Says Trader Lark Davis

    The Daily Hodl
  • 17m

    ‘Naturally decentralized’ island nations like Tuvalu are perfect for blockchain ledgers, says forum

    Coin Geek
  • 18m

    ⇑ UK hedge fund reportedly plans to invest $84M in crypto

    Hodl Hodl News
  • 19m

    Macau's Government Will Amend Its Laws to Introduce the PBoC's Digital Currency

    bits.media
  • 20m

    XTZ is really undervalued

    CryptoCurrencyTrading
  • 29m

    What Is a Perpetual Contract? (2/2)

    CN CryptoNews
  • 33m

    Maker (MKR) Token Hits a New High as MakerDAO's First Financing Proposal Gets Approved

    Blockchain News
  • 40m

    What Is a Perpetual Contract? (1/2)

    CN CryptoNews
  • 42m

    Dogecoin Market Cap Surpasses Southwest Airlines as Rumors Arise That Robinhood Is Restricting Dogecoin Trading

    TheCryptoBasic
  • 47m

    Coinbase Goes Public via Direct Listing, Ranks High on Finance App Charts, and Becomes a New Focus of Public Attention

    CN CryptoNews
  • 47m

    The AIBC and AGS Blockchain Summits Will Be Held on May 25–26 with Support from the Government of Dubai

    bits.media
  • 47m

    #BNB [twitter.com] [video.twimg.com]

    Binance
  • 55m

    Who has the best burn meme? May need one in a few hours. #BNB

    CZ Binance
  • 1h

    Turkey Bans Crypto Payments

    CryptoNews
  • 1h

    An Online Hackathon for Developing DeFi and NFT Projects on Binance Smart Chain Kicked Off on April 15

    bits.media
  • 1h

    China coal mine accidents may be behind bitcoin’s hash rate drop

    The Block
  • 1h

    Robinhood Restores Crypto Trading After Dogecoin Volumes Cause ‘Major Outage’

    be[in]crypto
  • 1h

    Shopify executive changes not enough to undermine success, analyst says

    Currency Times
  • 1h

    Robinhood Faces Technical Issues, Dogecoin Jumps 100%

    Finance Magnates
  • 1h

    Chinese mining pools’ hash power plummets amid regional blackouts

    Cointelegraph
  • 1h

    Blockchain is hard for developers and everyday users. Is it getting any easier?

    Cointelegraph
  • 1h

    Today's @binance page 😀 #TRX #BTT [twitter.com] [pbs.twimg.com]

    Justin Sun Tron
  • 1h

    TA: Ethereum Corrects Rally, But 100 SMA Could Spark Fresh Increase

    NewsBTC
  • 2h

    What made NMX (+2,600% in March) one of the top DeFi tokens of 2021

    Cryptopolitan
  • 2h

    Feels like #BTT will be the next #Doge! #BitTorrent

    Justin Sun Tron
  • 2h

    Brian Brooks defends fintech charter to House Financial Services Committee

    Cointelegraph
  • 2h

    The stablecoin flippening is happening! [coindesk.com]

    Justin Sun Tron
  • 2h

    On-chain voting launches on Gorilla DAO

    Coin Geek
  • 2h

    European Hedge Fund Firm Plans to Buy $84 Million Worth of Cryptocurrencies

    Blockchain News
  • 2h

    Turkish central bank to ban crypto payments for goods and services by end of April

    Hodl Hodl News
  • 2h

    We just executed a buyback of 52,460 $CAKE worth $1.16M. [bscscan.com] 1⃣9,585 CAKE-BNB LPs paid in IFO participation fees 2⃣Decomposed to 52,628 CAKE & 2,276 BNB 3⃣Used the BNB to market buy CAKE 🤝 An additional 105k CAKE to be burned on Monday. [twitter.com] [video.twimg.com]

    Pancake Swap
  • 2h

    #Binance Adds New Trading Pairs: 🔸 $BNB/ $UAH 🔸 $ONT/ $TRY @OntologyNetwork 🔸 $VET/ $EUR @vechainofficial 🔸 $VET/ $GBP 🔸 $WIN/ $BRL @WINkorg777 [binance.com]

    Binance
  • 2h

    Top Upcoming Crypto Events (04/16 – 04/22)

    Altcoin Buzz
  • 2h

    Ethereum Breaks USD 2,500, DOGE Doubles while Bitcoin Consolidates

    CryptoNews
  • 2h

    Liquity Protocol attracts $1B TVL in just 10 days

    Cointelegraph
  • 2h

    Ripple Tapped by Novatti to Improve Australian Remittances to Southeast Asia

    Blockchain News
  • 3h

    Insurance Premiums Can Now Be Paid with Bitcoin! AXA Sets a Precedent in the Swiss Insurance Industry

    Blockcast「區塊客」
  • 3h

    Ethereum price cracks $2,500 for the first time in history, shifts attention to $3,000

    CoinGape
  • 3h

    Ethereum finally breaches $2500 to hit yet another ATH

    AMBCrypto
  • 3h

    Ethereum Berlin upgrade is now live—how will it affect ETH prices?

    CryptoSlate
  • 3h

    Venture Capitalist Garry Tan Recounts How His Early Coinbase Investment "Exploded 6,000x"

    Blockcast「區塊客」
  • 3h

    First Charity Project Using NFT: All Profits From Sales of Content Produced by Miss Bitcoin To Be Donated

    The Daily Hodl
  • 3h

    Bitcoin price uptrend to $65,000 decelerates amid potential bearish comeback

    CoinGape
  • 06 April
  • 1w

    RT/ A robot that senses hidden objects

    Paradigm Fund
Tue, Apr 6, 2021 8:30 PM by Paradigm Fund

RT/ A robot that senses hidden objects


Robotics biweekly vol. 28, 25th March — 6th April

TL;DR

• MIT researchers developed a picking robot that combines vision with radio frequency (RF) sensing to find and grasp objects, even if they're hidden from view. The technology could aid fulfillment in e-commerce warehouses.
• Scientists create the next version of Xenobots — tiny biological robots that self-assemble, carry out tasks, and can repair themselves. Now they can move faster and record information.
• A team of researchers has developed a new early warning system for vehicles that uses artificial intelligence to learn from thousands of real traffic situations. The results show that, if used in today's self-driving vehicles, it can warn seven seconds in advance against potentially critical situations that the cars cannot handle alone — with over 85% accuracy.
• This "metal-eating" robot can follow a metal path without using a computer or needing a battery. By wiring the power-supplying units to the wheels on the opposite side, the robot autonomously navigates towards aluminum surfaces and away from hazards that block its energy source.
• Engineers have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across the water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.
• Living organisms, from bacteria to animals and humans, can perceive their environment and process, store and retrieve this information. They learn how to react to later situations using appropriate actions. A team of physicists has developed a method for giving tiny artificial microswimmers a certain ability to learn using machine learning algorithms.
• Researchers developed a deep learning neural network to aid the design of soft-bodied robots. The algorithm optimizes the arrangement of sensors on the robot, enabling it to complete tasks as efficiently as possible.
• Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists have trained an AI agent — an autonomous computational program that observes and acts — to conduct research experiments at superhuman levels by using the same approach.
• Researchers have developed a system that uses wireless radio signals and artificial intelligence to detect errors in patients' use of inhalers and insulin pens. The technology could reduce unnecessary hospital admissions caused by poor adherence to certain medication administration guidelines.
• This spring 2021 GRASP SFI comes from Monroe Kennedy III at Stanford University, on "Considerations for Human-Robot Collaboration."
• In the second session of HAI's spring conference, artists and technologists discussed how technology can enhance creativity, reimagine meaning, and support racial and social justice. The conference, called "Intelligence Augmentation: AI Empowering People to Solve Global Challenges," took place on 25 March 2021.
• Festo's Bionic Learning Network for 2021 presents a flock of BionicSwifts.
• Check out upcoming robotics events. And more!

Robotics market

The global market for robots is expected to grow at a compound annual growth rate (CAGR) of around 26 percent to reach just under 210 billion U.S. dollars by 2025. It is predicted that this market will hit the 100 billion U.S. dollar mark in 2020.

Size of the global market for industrial and non-industrial robots between 2018 and 2025 (in billion U.S. dollars). Source: Statista
Latest News & Researches

Robotic Grasping of Fully-Occluded Objects using RF Perception

by Massachusetts Institute of Technology researchers

In recent years, robots have gained artificial vision, touch, and even smell. "Researchers have been giving robots human-like perception," says MIT Associate Professor Fadel Adib. In a new paper, Adib's team is pushing the technology a step further. "We're trying to give robots superhuman perception," he says.

The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper's lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group, and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That's in part because robots struggle to locate and grasp objects in such a crowded environment. "Perception and picking are two roadblocks in the industry today," says Rodriguez. Using optical vision alone, robots can't perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don't pass through walls.

But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.

"RF is such a different sensing modality than vision," says Rodriguez. "It would be a mistake not to explore what RF can do."

RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they're fully blocked from the camera's view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot's wrist.
The RF reader stands independent of the robot and relays tracking information to the robot's control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot's decision making was one of the biggest challenges the researchers faced.

"The robot has to decide, at each point in time, which of these streams is more important to think about," says Boroushaki. "It's not just eye-hand coordination, it's RF-eye-hand coordination. So, the problem gets very complicated."

The robot initiates the seek-and-pluck process by pinging the target object's RF tag for a sense of its whereabouts. "It starts by using RF to focus the attention of vision," says Adib. "Then you use vision to navigate fine maneuvers." The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren's source.

With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot's decision making.

RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to "declutter" its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp's "unfair advantage" over robots without penetrative RF sensing. "It has this guidance that other systems simply don't have."

RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item's identity without the need to manipulate the item, expose its barcode, then scan it. "RF has the potential to improve some of those limitations in industry, especially in perception and localization," says Rodriguez.

Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. "Or you could imagine the robot finding lost items. It's like a super-Roomba that goes and retrieves my keys, wherever the heck I put them."
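The "RF first, vision for the fine maneuvers" hand-off described above can be summarized as a simple arbitration rule: follow the coarse RF tag estimate (clearing clutter on the way) until the item is both close and visible, then let the camera take over. The following is a minimal illustrative sketch of that rule only, not the RF-Grasp implementation; the threshold value and the action names are assumptions introduced here for illustration.

```python
import numpy as np

NEAR_THRESHOLD_M = 0.30  # assumed hand-off distance from RF guidance to vision

def choose_action(gripper_xyz, rf_tag_xyz, target_visible, obstacle_in_path):
    """Decide which sensing stream should dominate on this control step."""
    distance = np.linalg.norm(np.asarray(rf_tag_xyz) - np.asarray(gripper_xyz))
    if distance > NEAR_THRESHOLD_M or not target_visible:
        # RF dominates: head toward the coarse tag estimate, decluttering if needed.
        return "declutter" if obstacle_in_path else "move_toward_rf_estimate"
    # Vision dominates: servo on the camera detection for the final grasp.
    return "visual_servo_and_grasp"

# Example: target 1.2 m away and hidden behind packing material -> RF-guided decluttering.
print(choose_action([0, 0, 0], [1.0, 0.5, 0.3], target_visible=False, obstacle_in_path=True))
```

The key design point mirrored here is that RF alone fixes where to look, while vision alone fixes how to grasp; the arbitration only chooses which of the two currently drives the arm.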
A cellular platform for the development of synthetic living machines

by Douglas Blackiston, Emma Lederer, Sam Kriegman, Simon Garnier, Joshua Bongard, Michael Levin in Science Robotics

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called "Xenobots" that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

The same team has now created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory. The new generation Xenobots also move faster, navigate different environments, and have longer lifespans than the first edition, and they still have the ability to work together in groups and heal themselves if damaged.

Compared to Xenobots 1.0, in which the millimeter-sized automatons were constructed in a "top down" approach by manual placement of tissue and surgical shaping of frog skin and cardiac cells to produce motion, the next version of Xenobots takes a "bottom up" approach. The biologists at Tufts took stem cells from embryos of the African frog Xenopus laevis (hence the name "Xenobots") and allowed them to self-assemble and grow into spheroids, where some of the cells after a few days differentiated to produce cilia — tiny hair-like projections that move back and forth or rotate in a specific way. Instead of using manually sculpted cardiac cells whose natural rhythmic contractions allowed the original Xenobots to scuttle around, cilia give the new spheroidal bots "legs" to move them rapidly across a surface. In a frog, or human for that matter, cilia would normally be found on mucous surfaces, like in the lungs, to help push out pathogens and other foreign material. On the Xenobots, they are repurposed to provide rapid locomotion.

"We are witnessing the remarkable plasticity of cellular collectives, which build a rudimentary new 'body' that is quite distinct from their default — in this case, a frog — despite having a completely normal genome," said Michael Levin, Distinguished Professor of Biology and director of the Allen Discovery Center at Tufts University, and corresponding author of the study. "In a frog embryo, cells cooperate to create a tadpole. Here, removed from that context, we see that cells can re-purpose their genetically encoded hardware, like cilia, for new functions such as locomotion. It is amazing that cells can spontaneously take on new roles and create new body plans and behaviors without long periods of evolutionary selection for those features."

"In a way, the Xenobots are constructed much like a traditional robot. Only we use cells and tissues rather than artificial components to build the shape and create predictable behavior," said senior scientist Doug Blackiston, who co-first authored the study with research technician Emma Lederer. "On the biology end, this approach is helping us understand how cells communicate as they interact with one another during development, and how we might better control those interactions."

While the Tufts scientists created the physical organisms, scientists at UVM were busy running computer simulations that modeled different shapes of the Xenobots to see if they might exhibit different behaviors, both individually and in groups. Using the Deep Green supercomputer cluster at UVM's Vermont Advanced Computing Core, the team, led by computer scientists and robotics experts Josh Bongard and Sam Kriegman, simulated the Xenobots under hundreds of thousands of random environmental conditions using an evolutionary algorithm. These simulations were used to identify Xenobots most able to work together in swarms to gather large piles of debris in a field of particles.

"We know the task, but it's not at all obvious — for people — what a successful design should look like. That's where the supercomputer comes in and searches over the space of all possible Xenobot swarms to find the swarm that does the job best," says Bongard. "We want Xenobots to do useful work. Right now we're giving them simple tasks, but ultimately we're aiming for a new kind of living tool that could, for example, clean up microplastics in the ocean or contaminants in soil."

It turns out, the new Xenobots are much faster and better at tasks such as garbage collection than last year's model, working together in a swarm to sweep through a petri dish and gather larger piles of iron oxide particles. They can also cover large flat surfaces, or travel through narrow capillaries.
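The in silico search described above follows the familiar evolutionary-algorithm loop: propose many candidate designs, score each in simulation, keep the best, mutate, and repeat. Below is a minimal generic sketch of that loop, not the UVM/Deep Green pipeline; the design vector and the fitness function are toy placeholders standing in for "swarm shape parameters" and "debris gathered in simulation."

```python
import random

def toy_fitness(design):
    # Placeholder for "how much debris does this swarm pile up in simulation?"
    return -sum((x - 0.5) ** 2 for x in design)

def evolve(pop_size=50, design_len=8, generations=100, mutation_scale=0.05):
    population = [[random.random() for _ in range(design_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=toy_fitness, reverse=True)
        parents = scored[: pop_size // 5]                      # keep the top 20%
        population = [                                          # mutate copies of parents
            [x + random.gauss(0.0, mutation_scale) for x in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(population, key=toy_fitness)

best = evolve()
print("best design:", [round(x, 2) for x in best])
```

In the real study the expensive part is the fitness evaluation (a physics simulation of a Xenobot swarm), which is why a supercomputer cluster is needed; the outer search loop itself stays this simple.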
These studies also suggest that the in silico simulations could in the future optimize additional features of biological bots for more complex behaviors. One important feature added in the Xenobot upgrade is the ability to record information.Now with memoryA central feature of robotics is the ability to record memory and use that information to modify the robot’s actions and behavior. With that in mind, the Tufts scientists engineered the Xenobots with a read/write capability to record one bit of information, using a fluorescent reporter protein called EosFP, which normally glows green. However, when exposed to light at 390nm wavelength, the protein emits red light instead.The cells of the frog embryos were injected with messenger RNA coding for the EosFP protein before stem cells were excised to create the Xenobots. The mature Xenobots now have a built-in fluorescent switch which can record exposure to blue light around 390nm.The researchers tested the memory function by allowing 10 Xenobots to swim around a surface on which one spot is illuminated with a beam of 390nm light. After two hours, they found that three bots emitted red light. The rest remained their original green, effectively recording the “travel experience” of the bots.This proof of principle of molecular memory could be extended in the future to detect and record not only light but also the presence of radioactive contamination, chemical pollutants, drugs, or a disease condition. Further engineering of the memory function could enable the recording of multiple stimuli (more bits of information) or allow the bots to release compounds or change behavior upon sensation of stimuli.“When we bring in more capabilities to the bots, we can use the computer simulations to design them with more complex behaviors and the ability to carry out more elaborate tasks,” said Bongard. “We could potentially design them not only to report conditions in their environment but also to modify and repair conditions in their environment.”Xenobot, heal thyself“The biological materials we are using have many features we would like to someday implement in the bots — cells can act like sensors, motors for movement, communication and computation networks, and recording devices to store information,” said Levin. “One thing the Xenobots and future versions of biological bots can do that their metal and plastic counterparts have difficulty doing is constructing their own body plan as the cells grow and mature, and then repairing and restoring themselves if they become damaged. Healing is a natural feature of living organisms, and it is preserved in Xenobot biology.”The new Xenobots were remarkably adept at healing and would close the majority of a severe full-length laceration half their thickness within 5 minutes of the injury. All injured bots were able to ultimately heal the wound, restore their shape and continue their work as before.Another advantage of a biological robot, Levin adds, is metabolism. Unlike metal and plastic robots, the cells in a biological robot can absorb and break down chemicals and work like tiny factories synthesizing and excreting chemicals and proteins. 
The whole field of synthetic biology — which has largely focused on reprogramming single celled organisms to produce useful molecules — can now be exploited in these multicellular creatures.Like the original Xenobots, the upgraded bots can survive up to ten days on their embryonic energy stores and run their tasks without additional energy sources, but they can also carry on at full speed for many months if kept in a “soup” of nutrients.What the scientists are really afterAn engaging description of the biological bots and what we can learn from them is presented in a TED talk by Michael Levin.In his TED Talk, professor Levin describes not only the remarkable potential for tiny biological robots to carry out useful tasks in the environment or potentially in therapeutic applications, but he also points out what may be the most valuable benefit of this research — using the bots to understand how individual cells come together, communicate, and specialize to create a larger organism, as they do in nature to create a frog or human. It’s a new model system that can provide a foundation for regenerative medicine.Xenobots and their successors may also provide insight into how multicellular organisms arose from ancient single celled organisms, and the origins of information processing, decision making and cognition in biological organisms.Introspective Failure Prediction for Autonomous Driving Using Late Fusion of State and Camera Informationby Christopher B. Kuhn, Markus Hofbauer, Goran Petrovic, Eckehard Steinbach in IEEE Transactions on Intelligent Transportation SystemsA team of researchers at the Technical University of Munich (TUM) has developed a new early warning system for vehicles that uses artificial intelligence to learn from thousands of real traffic situations. A study of the system was carried out in cooperation with the BMW Group. The results show that, if used in today’s self-driving vehicles, it can warn seven seconds in advance against potentially critical situations that the cars cannot handle alone — with over 85% accuracy.To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyze the behavior of all traffic participants. But what happens if the models are not yet capable of handling some complex or unforeseen situations?A team working with Prof. Eckehard Steinbach, who holds the Chair of Media Technology and is a member of the Board of Directors of the Munich School of Robotics and Machine Intelligence (MSRM) at TUM, is taking a new approach. Thanks to artificial intelligence (AI), their system can learn from past situations where self-driving test vehicles were pushed to their limits in real-world road traffic. Those are situations where a human driver takes over — either because the car signals the need for intervention or because the driver decides to intervene for safety reasons.Pattern recognition through RNNThe technology uses sensors and cameras to capture surrounding conditions and records status data for the vehicle such as the steering wheel angle, road conditions, weather, visibility and speed. The AI system, based on a recurrent neural network (RNN), learns to recognize patterns with the data. 
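For illustration, a sequence classifier in this spirit could look like the sketch below: it consumes a window of recent vehicle-state samples and outputs the probability that a driver intervention will soon be needed. This is a hedged example only; the feature set, window length, sampling rate, and network sizes are assumptions introduced here, not details of the TUM/BMW system.

```python
import torch
import torch.nn as nn

class InterventionPredictor(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for "critical situation ahead"

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        _, h_last = self.rnn(x)           # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0))

model = InterventionPredictor()
# e.g. 7 seconds of state samples at 10 Hz, 8 features (speed, steering angle, ...)
window = torch.randn(1, 70, 8)
prob = torch.sigmoid(model(window))
print(f"warning probability: {prob.item():.2f}")
```

Training such a model amounts to labeling each recorded window with whether a takeover followed shortly afterwards, which matches the article's point that every intervention on a test drive yields a new training example.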
If the system spots a pattern in a new driving situation that the control system was unable to handle in the past, the driver will be warned in advance of a possible critical situation.“To make vehicles more autonomous, many existing methods study what the cars now understand about traffic and then try to improve the models used by them. The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns,” says Steinbach. “In this way, the AI discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover. Our system therefore offers a safety function that knows when and where the cars have weaknesses.”Warnings up to seven seconds in advanceThe team of researchers tested the technology with the BMW Group and its autonomous development vehicles on public roads and analyzed around 2500 situations where the driver had to intervene. The study showed that the AI is already capable of predicting potentially critical situations with better than 85 percent accuracy — up to seven seconds before they occur.Collecting data with no extra effortFor the technology to function, large quantities of data are needed. After all, the AI can only recognize and predict experiences at the limits of the system if the situations were seen before. With the large number of development vehicles on the road, the data was practically generated by itself, says Christopher Kuhn, one of the authors of the study: “Every time a potentially critical situation comes up on a test drive, we end up with a new training example.” The central storage of the data makes it possible for every vehicle to learn from all of the data recorded across the entire fleet.Gaming the beamlines — employing reinforcement learning to maximize scientific outcomes at large-scale user facilitiesby Phillip M Maffettone, Joshua K Lynch, Thomas A Caswell, Clara E Cook, Stuart I Campbell, Daniel Olds in Machine Learning: Science and TechnologyInspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) trained an AI agent — an autonomous computational program that observes and acts — how to conduct research experiments at superhuman levels by using the same approach. The Brookhaven team implemented the AI agent as part of the research capabilities at NSLS-II.As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations — called beamlines — is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day — despite the facility’s 24/7 operations.“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. 
We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”“This is why we taught an AI agent to conduct scientific experiments as if they were video games. This allows a robot to run the experiment, while we — humans — are not there. It enables round-the-clock, fully remote, hands-off experimentation with roughly twice the efficiency that humans can achieve,” added Phillip Maffettone, research associate at NSLS-II and first author on the study.According to the researchers, they didn’t even have to give the AI agent the rules of the ‘game’ to run the experiment. Instead, the team used a method called “reinforcement learning” to train an AI agent on how to run a successful scientific experiment, and then tested their agent on simulated research data from the Pair Distribution Function beamline at NSLS-II.Beamline Experiments: A Boss Level ChallengeReinforcement learning is one strategy of training an AI agent to master an ability. The idea of reinforcement learning is that the AI agent perceives an environment — a world — and can influence it by performing actions. Depending on how the AI agent interacts with the world, it may receive a reward or a penalty, reflecting if this specific interaction is a good choice or a poor one. The trick is that the AI agent retains the memory of its interactions with the world, so that it can learn from the experience for when it tries again. In this way, the AI agent figures out how to master a task by collecting the most rewards.“Reinforcement learning really lends itself to teaching AI agents how to play video games. It is most successful with games that have a simple concept — like collecting as many coins as possible — but also have hidden layers, like secret tunnels containing more coins. Beamline experiments follow a similar idea: the basic concept is simple, but there are hidden secrets we want to uncover. Basically, for an AI agent to run our beamline, we needed to turn our beamline into a video game,” said Olds.Maffettone added, “The comparison to a video game works well for the beamline. In both cases, the AI agent acts in a world with clear rules. In the world of Super Mario, the AI agent can choose to move Mario up, down, left, right; while at the beamline, the actions would be the motions of the sample or the detector and deciding when to take data. The real challenge is to simulate the environment correctly — a video game like Super Mario is already a simulated world and you can just let the AI agent play it a million times to learn it. So, for us, the question was how can we simulate a beamline in such a way that the AI agent can play a million experiments without actually running them” said Maffettone.The team “gamified” the beamline by building a virtual version of it that simulated the measurements the real beamline can do. They used millions of data sets that the AI agent could gather while “playing” to run experiments on the virtual beamline.“Training these AIs is very different than most of the programming we do at beamlines. You aren’t telling the agents explicitly what to do, but you are trying to figure out a reward structure that gets them to behave the way you want. It’s a bit like teaching a kid how to play video games for the first time. 
You don’t want to tell them every move they should make, you want them to begin inferring the strategies themselves.” Olds said.Once the beamline was simulated and the AI agent learned how to conduct research experiments using the virtual beamline, it was time to test the AI’s capability of dealing with many unknown samples.“The most common experiments at our beamline involve everything from one to hundreds of samples that are often variations of the same material or similar materials — but we don’t know enough about the samples to understand how we can measure them the best way. So, as humans, we would need to go through them all, one by one, take a snapshot measurement and then, based on that work, come up with a good strategy. Now, we just let the pre-trained AI agent work it out,” said Olds.In their simulated research scenarios, the AI agent was able to measure unknown samples with up to twice the efficiency of humans under strongly constrained circumstances, such as limited measurement time.“We didn’t have to program in a scientist’s logic of how to run an experiment, it figured these strategies out by itself through repetitive playing.” Olds said.Materials Discovery: Loading New GameWith the AI agent ready for action, it was time for the team to figure out how it could run a real experiment by moving the actual components of the beamline. For this challenge, the scientists teamed up with NSLS-II’s Data Acquisition, Management, and Analysis Group to create the backend infrastructure. They developed a program called Bluesky-adaptive, which acts as a generic interface between AI tools and Bluesky — the software suite that runs all of NSLS-II’s beamlines. This interface laid the necessary groundwork to use similar AI tools at any of the other 28 beamlines at NSLS-II.“Our agent can now not only be used for one type of sample, or one type of measurement — it’s very adaptable. We are able to adjust it or extend it as needed. Now that the pipeline exists, it would take me 45 minutes talking to the person and 15 minutes at my keyboard to adjust the agent to their needs,” Maffettone said.The team expects to run the first real experiments using the AI agent this spring and is actively collaborating with other beamlines at NSLS-II to make the tool accessible for other measurements.“Using our instruments’ time more efficiently is like running an engine more efficiently — we are making more discoveries per year happen. We hope that our new tool will enable a new transformative approach to increase our output as user facility with the same resources.”Schematic overview of the Bluesky system. The components in the left column are responsible for orchestration of the beamline for data acquisition. As the experiment progresses, the data is published to the consumers which may serialize it to disk for later access via DataBroker, display to the screen, or preform some prompt analysis. In this paper, researchers are primarily interested in the loop at the top where they feed back from prompt analysis to the currently running plan via Bluesky-adaptive.Computer‐Free Autonomous Navigation and Power Generation Using Electro‐Chemotaxisby Min Wang, Yue Gao, James H. Pikul in Advanced Intelligent SystemsWhen it comes to powering mobile robots, batteries present a problematic paradox: the more energy they contain, the more they weigh, and thus the more energy the robot needs to move. 
Energy harvesters, like solar panels, might work for some applications, but they don’t deliver power quickly or consistently enough for sustained travel.James Pikul, assistant professor in Penn Engineering’s Department of Mechanical Engineering and Applied Mechanics, is developing robot-powering technology that has the best of both worlds. His environmentally controlled voltage source, or ECVS, works like a battery, in that the energy is produced by repeatedly breaking and forming chemical bonds, but it escapes the weight paradox by finding those chemical bonds in the robot’s environment, like a harvester. While in contact with a metal surface, an ECVS unit catalyzes an oxidation reaction with the surrounding air, powering the robot with the freed electrons.Pikul’s approach was inspired by how animals power themselves through foraging for chemical bonds in the form of food. And like a simple organism, these ECVS-powered robots are now capable of searching for their own food sources despite lacking a “brain.”In a new study published as an Editor’s Choice article in Advanced Intelligent Systems, Pikul, along with lab members Min Wang and Yue Gao, demonstrate a wheeled robot that can navigate its environment without a computer. By having the left and right wheels of the robot powered by different ECVS units, they show a rudimentary form of navigation and foraging, where the robot will automatically steer toward metallic surfaces it can “eat.”Their study also outlines more complicated behavior that can be achieved without a central processor. With different spatial and sequential arrangements of ECVS units, a robot can perform a variety of logical operations based on the presence or absence of its food source.“Bacteria are able to autonomously navigate toward nutrients through a process called chemotaxis, where they sense and respond to changes in chemical concentrations,” Pikul says. “Small robots have similar constraints to microorganisms, since they can’t carry big batteries or complicated computers, so we wanted to explore how our ECVS technology could replicate that kind of behavior.”In the researchers’ experiments, they placed their robot on aluminum surfaces capable of powering its ECVS units. By adding “hazards” that would prevent the robot from making contact with the metal, they showed how ECVS units could both get the robot moving and navigate it toward more energy-rich sources.“In some ways,” Pikul says, “they are like a tongue in that they both sense and help digest energy.”One type of hazard was a curving path of insulating tape. The researchers showed that the robot would autonomously follow the metal lane in between two lines of tape if its EVCS units were wired to the wheels on the opposite side. If the lane curved to the left, for example, the ECVS on the right side of the robot would begin to lose power first, slowing the robot’s left wheels and causing it to turn away from the hazard.Another hazard took the form of a viscous insulating gel, which the robot could gradually wipe away by driving over it. 
Since the thickness of the gel was directly related to the amount of power the robot’s ECVS units could draw from the metal underneath it, the researchers were able to show that the robot’s turning radius was responsive to that sort of environmental signal.By understanding the types of cues ECVS units can pick up, the researchers can devise different ways of incorporating them into the design of a robot in order to achieve the desired type of navigation.“Wiring the ECVS units to opposite motors allows the robot to avoid the surfaces they don’t like,” says Pikul. “But when the ECVS units are in parallel to both motors, they operate like an ‘OR’ gate, in that they ignore chemical or physical changes that occur under just one power source.”“We can use this sort of wiring to match biological preferences,” he says. “It’s important to be able to tell the difference between environments that are dangerous and need to be avoided, and ones that are just inconvenient and can be passed through if necessary.”As ECVS technology evolves, they can be used to program even more complicated and responsive behaviors in autonomous, computerless robots. By matching the ECVS design to the environment that a robot needs to operate in, Pikul envisions tiny robots that crawl through rubble or other hazardous environments, getting sensors to critical locations while preserving themselves.“If we have different ECVS that are tuned to different chemistries, we can have robots that avoid surfaces that are dangerous, but power through ones that stand in the way of an objective,” Pikul says.Autonomous navigation. A) An illustration of an ant that avoids hazards and follows food to gain energy. The photo is adapted with permission from “Jerdon’s jumping ant with prey” by Vipin Baliga, licensed under CC BY‐NC‐SA 2.0. B) A schematic of a synthetic analog consisting of a vehicle that navigates along a metal fuel source while avoiding hazards. C) Sequential images of a reactive agent vehicle following a metal fuel path without computers.Microengineered Materials with Self‐Healing Features for Soft Roboticsby Vardhman Kumar, Ung Hyun Ko, Yilong Zhou, Jiaul Hoque, Gaurav Arya, Shyni Varghese in Advanced Intelligent SystemsEngineers at Duke University have developed an electronics-free, entirely soft robot shaped like a dragonfly that can skim across water and react to environmental conditions such as pH, temperature or the presence of oil. The proof-of-principle demonstration could be the precursor to more advanced, autonomous, long-range environmental sentinels for monitoring a wide range of potential telltale signs of problems.Soft robots are a growing trend in the industry due to their versatility. Soft parts can handle delicate objects such as biological tissues that metal or ceramic components would damage. Soft bodies can help robots float or squeeze into tight spaces where rigid frames would get stuck.The expanding field was on the mind of Shyni Varghese, professor of biomedical engineering, mechanical engineering and materials science, and orthopaedic surgery at Duke, when inspiration struck.“I got an email from Shyni from the airport saying she had an idea for a soft robot that uses a self-healing hydrogel that her group has invented in the past to react and move autonomously,” said Vardhman Kumar, a PhD student in Varghese’s laboratory and first author of the paper. “But that was the extent of the email, and I didn’t hear from her again for days. 
So the idea sort of sat in limbo for a little while until I had enough free time to pursue it, and Shyni said to go for it.”In 2012, Varghese and her laboratory created a self-healing hydrogel that reacts to changes in pH in a matter of seconds. Whether it be a crack in the hydrogel or two adjoining pieces “painted” with it, a change in acidity causes the hydrogel to form new bonds, which are completely reversible when the pH returns to its original levels.Varghese’s hastily written idea was to find a way to use this hydrogel on a soft robot that could travel across water and indicate places where the pH changes. Along with a few other innovations to signal changes in its surroundings, she figured her lab could design such a robot as a sort of autonomous environmental sensor.With the help of Ung Hyun Ko, a postdoctoral fellow also in Varghese’s laboratory, Kumar began designing a soft robot based on a fly. After several iterations, the pair settled on the shape of a dragonfly engineered with a network of interior microchannels that allow it to be controlled with air pressure.They created the body — about 2.25 inches long with a 1.4-inch wingspan — by pouring silicon into an aluminum mold and baking it. The team used soft lithography to create interior channels and connected with flexible silicon tubing.DraBot was born.“Getting DraBot to respond to air pressure controls over long distances using only self-actuators without any electronics was difficult,” said Ko. “That was definitely the most challenging part.”DraBot works by controlling the air pressure coming into its wings. Microchannels carry the air into the front wings, where it escapes through a series of holes pointed directly into the back wings. If both back wings are down, the airflow is blocked, and DraBot goes nowhere. But if both wings are up, DraBot goes forward.To add an element of control, the team also designed balloon actuators under each of the back wings close to DraBot’s body. When inflated, the balloons cause the wings to curl upward. By changing which wings are up or down, the researchers tell DraBot where to go.“We were happy when we were able to control DraBot, but it’s based on living things,” said Kumar. “And living things don’t just move around on their own, they react to their environment.”That’s where self-healing hydrogel comes in. By painting one set of wings with the hydrogel, the researchers were able to make DraBot responsive to changes in the surrounding water’s pH. If the water becomes acidic, one side’s front wing fuses with the back wing. Instead of traveling in a straight line as instructed, the imbalance causes the robot to spin in a circle. Once the pH returns to a normal level, the hydrogel “un-heals,” the fused wings separate, and DraBot once again becomes fully responsive to commands.To beef up its environmental awareness, the researchers also leveraged the sponges under the wings and doped the wings with temperature-responsive materials. When DraBot skims over water with oil floating on the surface, the sponges will soak it up and change color to the corresponding color of oil. And when the water becomes overly warm, DraBot’s wings change from red to yellow.The researchers believe these types of measurements could play an important part in an environmental robotic sensor in the future. Responsiveness to pH can detect freshwater acidification, which is a serious environmental problem affecting several geologically-sensitive regions. 
The ability to soak up oils makes such long-distance skimming robots an ideal candidate for early detection of oil spills. Changing colors due to temperatures could help spot signs of red tide and the bleaching of coral reefs, which leads to decline in the population of aquatic life.The team also sees many ways that they could improve on their proof-of-concept. Wireless cameras or solid-state sensors could enhance the capabilities of DraBot. And creating a form of onboard propellant would help similar bots break free of their tubing.“Instead of using air pressure to control the wings, I could envision using some sort of synthetic biology that generates energy,” said Varghese. “That’s a totally different field than I work in, so we’ll have to have a conversation with some potential collaborators to see what’s possible. But that’s part of the fun of working on an interdisciplinary project like this.”<a href="https://medium.com/media/726ce3c2282fb67593e88a144c858585/href">https://medium.com/media/726ce3c2282fb67593e88a144c858585/href</a>Reinforcement learning with artificial microswimmersby S. Muiños-Landin, A. Fischer, V. Holubec, F. Cichos in Science RoboticsMicroswimmers are artificial, self-propelled, microscopic particles. They are capable of directional motion in a solution. The Molecular Nanophotonics Group at Leipzig University has developed special particles that are smaller than one-thirtieth of the diameter of a hair. They can change their direction of motion by heating tiny gold particles on their surface and converting this energy into motion. “However, these miniaturised machines cannot take in and learn information like their living counterparts. To achieve this, we control the microswimmers externally so that they learn to navigate in a virtual environment through what is known as reinforcement learning,” said Cichos.With the help of virtual rewards, the microswimmers find their way through the liquid while repeatedly being thrown off of their path, mainly by Brownian motion. “Our results show that the best swimmer is not the one that is fastest, but rather that there is an optimal speed,” said Viktor Holubec, who worked on the project as a fellow of the Alexander von Humboldt Foundation and has now returned to the university in Prague.According to the scientists, linking artificial intelligence and active systems like in these microswimmers is a first small step towards new intelligent microscopic materials that can autonomously perform tasks while also adapting to their new environment. At the same time, they hope that the combination of artificial microswimmers and machine learning methods will provide new insights into the emergence of collective behaviour in biological systems. “Our goal is to develop artificial, smart building blocks that can perceive their environmental influences and actively react to them,” said the physicist. Once this method is fully developed and has been applied to other material systems, including biological ones, it could be used, for example, in the development of smart drugs or microscopic robot swarms.Gold nanoparticle–decorated microswimmer, states, and actions.(A) Sketch of the self-thermophoretic symmetric microswimmer. The particles used have an average radius of r = 1.09 μm and were covered on 30% of their surface with gold nanoparticles of about 10 nm diameter. A heating laser illuminates the colloid asymmetrically (at a distance d from the center), and the swimmer acquires a well-defined thermophoretic velocity v. 
(B) The gridworld contains 25 inner states (blue) with one goal at the top right corner (green). A set of 24 boundary states (red) is defined for the study of the noise influence. (C )In each of the states, researchers consider eight possible actions in which the particle is thermophoretically propelled along the indicated directions by positioning the laser focus accordingly. (D) The RL loop starts with measuring the position of the active particle and determining the state. For this state, a specific action is determined with the ϵ greedy procedure (see the Supplementary Materials for details). Afterward, a transition is made, the new state is determined, and a reward for the transition is given. On the basis of this reward, the Q-matrix is updated, and the procedure starts from step 1 until an episode ends by reaching the goal or exiting the gridworld to a boundary state.Co-Learning of Task and Sensor Placement for Soft Roboticsby Andrew Spielberg, Alexander Amini, Lillian Chin, Wojciech Matusik, Daniela Rus in IEEE Robotics and Automation LettersThere are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”The research will be presented during April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin, and professors Wojciech Matusik and Daniela Rus.Creating soft robots that complete real-world tasks has been a long-running challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. Rigid robots’ finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. “The main problem with soft robots is that they are infinitely dimensional,” says Spielberg. “Any point on a soft-bodied robot can, in theory, deform in any way possible.” That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.“You can’t put an infinite number of sensors on the robot itself,” says Spielberg. 
“So, the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?” The team turned to deep learning for an answer.The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to efficiently complete tasks. First, the researchers divided the robot’s body into regions called “particles.” Each particle’s rate of strain was provided as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the networks’ subsequent trials.By optimizing the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans’ intuition on where to site the sensors.The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots to the algorithm-sensorized robots. And the results weren’t close. “Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident on where the sensors should go,” says Amini. “It turns out there are a lot more subtleties in this problem than we initially expected.”Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, “we also need to think about how we’re going to sensorize these robots, and how that will interplay with other components of that system,” he says. And better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping. “That’s something where you need a very robust, well-optimized sense of touch,” says Spielberg. “So, there’s potential for immediate impact.”“Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks,” says Rus. “The sensors are an important aspect of the process, as they enable the soft robot to “see” and understand the world and its relationship with the world.”(a) The foundation of researchers’ models is a Particle Sparsifying Feature Extractor (PSFE) which takes as input the full, dense sensory information (left) and extracts a global feature representation (right) from a sparse subset of the inputs. The model simultaneously learns this representation and sparsification of the input. Since the input is an unordered point cloud, the PSFE also maintains order invariance through shared feature and point transformations as well as global pooling operations. Researchers employ the PSFE on various complex tasks: (b) Supervised regression and classification of object characteristics from grasp data. 
( c )Learned proprioception by combining PSFE with a variational decoder network. (d) Learned control policies for a soft robot.Assessment of medication self-administration using artificial intelligenceby Mingmin Zhao, Kreshnik Hoti, Hao Wang, Aniruddh Raghu, Dina Katabi in Nature MedicineFrom swallowing pills to injecting insulin, patients frequently administer their own medication. But they don’t always get it right. Improper adherence to doctors’ orders is commonplace, accounting for thousands of deaths and billions of dollars in medical costs annually. MIT researchers have developed a system to reduce those numbers for some types of medications.The new technology pairs wireless sensing with artificial intelligence to determine when a patient is using an insulin pen or inhaler, and flags potential errors in the patient’s administration method. “Some past work reports that up to 70% of patients do not take their insulin as prescribed, and many patients do not use inhalers properly,” says Dina Katabi, the Andrew and Erna Viteri Professor at MIT, whose research group has developed the new solution. The researchers say the system, which can be installed in a home, could alert patients and caregivers to medication errors and potentially reduce unnecessary hospital visits.The study’s lead authors are Mingmin Zhao, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and Kreshnik Hoti, a former visiting scientist at MIT and current faculty member at the University of Prishtina in Kosovo. Other co-authors include Hao Wang, a former CSAIL postdoc and current faculty member at Rutgers University, Aniruddh Raghu, a CSAIL PhD student.Some common drugs entail intricate delivery mechanisms. “For example, insulin pens require priming to make sure there are no air bubbles inside. And after injection, you have to hold for 10 seconds,” says Zhao. “All those little steps are necessary to properly deliver the drug to its active site.” Each step also presents opportunity for errors, especially when there’s no pharmacist present to offer corrective tips. Patients might not even realize when they make a mistake — so Zhao’s team designed an automated system that can.Their system can be broken down into three broad steps. First, a sensor tracks a patient’s movements within a 10-meter radius, using radio waves that reflect off their body. Next, artificial intelligence scours the reflected signals for signs of a patient self-administering an inhaler or insulin pen. Finally, the system alerts the patient or their health care provider when it detects an error in the patient’s self-administration.The researchers adapted their sensing method from a wireless technology they’d previously used to monitor people’s sleeping positions. It starts with a wall-mounted device that emits very low-power radio waves. When someone moves, they modulate the signal and reflect it back to the device’s sensor. Each unique movement yields a corresponding pattern of modulated radio waves that the device can decode. “One nice thing about this system is that it doesn’t require the patient to wear any sensors,” says Zhao. “It can even work through occlusions, similar to how you can access your Wi-Fi when you’re in a different room from your router.”The new sensor sits in the background at home, like a Wi-Fi router, and uses artificial intelligence to interpret the modulated radio waves. The team developed a neural network to key in on patterns indicating the use of an inhaler or insulin pen. 
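The correction stage described below ultimately reduces to comparing the durations of detected administration steps against the prescribed ones (for example, holding an insulin pen for only 5 seconds instead of 10). Here is a toy sketch of such a rule layer; the step names and thresholds are assumptions for illustration, not the published system.

```python
PRESCRIBED = {"prime": 1.0, "inject_hold": 10.0}   # minimum seconds per step (assumed)

def check_insulin_pen_event(step_durations):
    """step_durations: dict mapping a detected step name -> observed seconds."""
    issues = []
    for step, minimum in PRESCRIBED.items():
        observed = step_durations.get(step, 0.0)
        if observed < minimum:
            issues.append(f"{step}: held {observed:.0f}s, expected at least {minimum:.0f}s")
    return issues or ["administration looks correct"]

# Example: the pen was primed but only held for 5 of the prescribed 10 seconds.
print(check_insulin_pen_event({"prime": 1.2, "inject_hold": 5.0}))
```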
They trained the network to learn those patterns by performing example movements, some relevant (e.g. using an inhaler) and some not (e.g. eating). Through repetition and reinforcement, the network successfully detected 96 percent of insulin pen administrations and 99 percent of inhaler uses.Once it mastered the art of detection, the network also proved useful for correction. Every proper medicine administration follows a similar sequence — picking up the insulin pen, priming it, injecting, etc. So, the system can flag anomalies in any particular step. For example, the network can recognize if a patient holds down their insulin pen for five seconds instead of the prescribed 10 seconds. The system can then relay that information to the patient or directly to their doctor, so they can fix their technique.“By breaking it down into these steps, we can not only see how frequently the patient is using their device, but also assess their administration technique to see how well they’re doing,” says Zhao.The researchers say a key feature of their radio wave-based system is its noninvasiveness. “An alternative way to solve this problem is by installing cameras,” says Zhao. “But using a wireless signal is much less intrusive. It doesn’t show peoples’ appearance.”He adds that their framework could be adapted to medications beyond inhalers and insulin pens — all it would take is retraining the neural network to recognize the appropriate sequence of movements. Zhao says that “with this type of sensing technology at home, we could detect issues early on, so the person can see a doctor before the problem is exacerbated.”a, The wireless sensor is mounted on the wall, analyzing the surrounding radio signals using AI. The AI solution would detect when the person started to use an inhaler. b–d, The AI solution also tracks the motion during the MSA event and detects that the person shook the device, exhaled before use and, finally, inhaled a dose.VideosFesto’s Bionic Learning Network for 2021 presents a flock of BionicSwifts:<a href="https://medium.com/media/c2c7af142c96c38c414ddadc4580ba55/href">https://medium.com/media/c2c7af142c96c38c414ddadc4580ba55/href</a>The legendary Zenta is back after a two year YouTube hiatus with “a kind of freaky furry hexapod bunny creature.”<a href="https://medium.com/media/ed306d081e2b9748eb3e45ef14970bcc/href">https://medium.com/media/ed306d081e2b9748eb3e45ef14970bcc/href</a>SoftBank may not have Spot cheerleading robots for their baseball team anymore, but they’ve more than made up for it with a full century of Peppers. And one dude doing the robot.<a href="https://medium.com/media/fd529b79cb1e189f27ad6caa7ccb5746/href">https://medium.com/media/fd529b79cb1e189f27ad6caa7ccb5746/href</a>This spring 2021 GRASP SFI comes from Monroe Kennedy III at Stanford University, on “Considerations for Human-Robot Collaboration.”<a href="https://medium.com/media/b78074d014c9c6472bb850a5aaab99be/href">https://medium.com/media/b78074d014c9c6472bb850a5aaab99be/href</a>In the second session of HAI’s spring conference, artists and technologists discussed how technology can enhance creativity, reimagine meaning, and support racial and social justice. 
The conference, called "Intelligence Augmentation: AI Empowering People to Solve Global Challenges," took place on 25 March 2021.

Upcoming events

RoboSoft 2021 — April 12–16, 2021 — [Online Conference]
ICRA 2021 — May 30 – June 5, 2021 — Xi'an, China
DARPA SubT Finals — September 21–23, 2021 — Louisville, KY, USA
WeRobot 2021 — September 23–25, 2021 — Coral Gables, FL, USA

MISC

@SciRobotics

Subscribe to Paradigm!

Medium. Twitter. Telegram. Telegram Chat. Reddit. LinkedIn.

Main sources

Research articles
Science Robotics
Science Daily
IEEE Spectrum
