Boulder Future Salon Recent News Bits

Thumbnail "In the game The Witness, players solve puzzles by tracing patterns that begin with circles, and continue in continuous line to and endpoint point. At first these patterns appear only on panels, but players eventually realize that the entire island is filled with these patterns, and the real goal is to recognize these patterns in the surrounding environment. And after finishing, many players, myself included, began seeing these patterns in the real world as well."

"The Witness puzzle patterns in panels, in the environment, and in the real world! I trained a deep learning model to identify and label those puzzle patterns in The Witness screenshots."

"First, I went through the game and took several screenshots of each environmental puzzle: about 300 in total. Thanks to IGN for this comprehensive guide to the locations of all the puzzles, and SaveGameWorld for a save file so I didn't have to actually gain access to all the locations again! I made sure to capture screenshots from different angles, plenty of positive examples with the puzzle fully visible, and negative examples where parts are obscured or interrupted."
Thumbnail "Gradient descent not fast enough? Tired of managing memory and juggling template parameters to interface with your favorite nonlinear solver in C++?"

"OpTorch lets you write your cost functions as PyTorch modules and seamlessly optimize them in ceres, Google's industrial strength solver."
Thumbnail "In my opinion, PyTorch's automatic differentiation engine, called Autograd is a brilliant tool to understand how automatic differentiation works."

"All mathematical operations in PyTorch are implemented by the torch.nn.Autograd.Function class. This class has two important member functions we need to look at."

"The first is it's forward function, which simply computes the output using it's inputs. The backward function takes the incoming gradient coming from the the part of the network in front of it. As you can see, the gradient to be backpropagated from a function f is basically the gradient that is backpropagated to f from the layers in front of it multiplied by the local gradient of the output of f with respect to it's inputs. This is exactly what the backward function does."
Thumbnail TF.Text is a TensorFlow 2.0 library that makes much of the grunt work of text processing (called "preprocessing" in machine learning parlance) easier.
Thumbnail Transfer RNA (tRNA) fragments increase in the blood in the hours before a seizure.

Talk about something I never would've expected. What's the connection between tRNA and seizures? Messenger RNA (mRNA) is what carries the genetic information from the DNA in the cell nucleus to the ribosomes. tRNA is used to connect the genetic code in the mRNA to the actual amino acids.
Thumbnail "The frequent use of words associated with sound preceeds diagnosis of psychosis. This was found using "a new machine-learning method to more precisely quantify the semantic richness of people's conversational language."

They trained their system on Reddit to establish baselines for normal conversation, which seems a bit questionable?
Thumbnail SpaceX's StarLink satellite internet, if it works, should have lower-latency communication than ground-based internet.
Thumbnail A carbon-neutral system for producing fuel from solar energy. "ETH researchers have developed a solar plant to produce synthetic liquid fuels that release as much CO2 during their combustion as previously extracted from the air for their production. CO2 and water are extracted directly from ambient air and split using solar energy. This process yields syngas, a mixture of hydrogen and carbon monoxide, which is subsequently processed into kerosene, methanol or other hydrocarbons. These drop-in fuels are ready for use in the existing global transport infrastructure."
Thumbnail "Imagine seeing an origami crane for the first time and trying to reverse-engineer the folds used to make it, when you've been given nothing but the unfolded piece of paper covered in hints written as parables from a foreign language. Now imagine that this reverse engineering problem meant folding the crane out of 1D strings instead of 2D sheets, and that it was in fact a real bird. This should give you an idea of the difficulty of protein structure prediction. Consider a short polypeptide chain where the links between each of 100 amino acids can adopt just three values of two angles. Spending 1 nanosecond per each possible conformation of the resulting peptide structure would take longer than the age of the universe by several orders of magnitude."

"At last year's Critical Assessment of protein Structure Prediction competition (CASP13), researchers from DeepMind made headlines by taking the top position in the free modeling category by a considerable margin, essentially doubling the rate of progress in CASP predictions of recent competitions. This is impressive, and a surprising result in the same vein as if a molecular biology lab with no previous involvement in deep learning were to solidly trounce experienced practitioners at modern machine learning benchmarks."

"Coming from DeepMind, we might expect a massive end-to-end deep learning model for protein structure prediction, but we'd be wrong." "The DeepMind team tried a 'fancier' strategy involving fragment assembly using Generative Adversarial Networks (GANs), but in the end the best results were obtained by gradient descent optimization. Gradient descent was applied to a combination of scores from their deep learning model as well as molecular modeling software Rosetta."
Thumbnail thisemotiondoesnotexist. So this website has a bunch of sliders on the left side with emotions, "happy", "sad", "surprised", etc., and when you change them it changes a line drawing of a face, and you can hit a 'play' button at the bottom to animate it.

"We asked people to view and rate video clips of emotional facial expressions. From these data we built a statistical model that captures the relationship between facial expressions and their emotional perception."
Thumbnail "While supervised learning has tremendously improved AI performance in image classification, a major drawback is its reliance on large-scale labeled datasets. This has prompted researchers to explore the potential of unsupervised learning and semi-supervised learning  --  techniques that forego data annotation but have their own drawback: diminished accuracy."

"A new paper from Google's UK-based research company DeepMind addresses this with a model based on Contrastive Predictive Coding (CPC) that outperforms the fully-supervised AlexNet model in Top-1 and Top-5 accuracy on ImageNet."

So basically what's going on here is that the computer is given a set of inputs without the correct answers for the output (called "labels" in machine learning parlance), and they run them through a feature extractor network and then a context network to obtain a "representation" of the input (which in this case is an image). The context network is used to train the feature extractor network to produce features that predict "context" (that is, surrounding parts of the image). Afterward the context network is ripped out and replaced with a classifier network, and the classifier network is trained on a small set of labeled inputs. When you then turn this combination of extractor+classifier network loose on images it's never seen before, it does surprisingly well. They also talk about "fine tuning" the extractor network with the labeled data.
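
Here's a toy sketch of that two-phase setup (my own drastic simplification, treating an image as a top-to-bottom sequence of patches, nowhere near DeepMind's actual scale):

import torch
import torch.nn as nn
import torch.nn.functional as F

# phase 1: no labels. The context network (a GRU here) must pick the true
# next-patch embedding out of the batch -- an InfoNCE-style contrastive loss.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))
context = nn.GRU(32, 32, batch_first=True)
predict = nn.Linear(32, 32, bias=False)

def cpc_loss(patches):                # patches: (B, T, 8, 8), unlabeled
    B, T = patches.shape[:2]
    z = encoder(patches.reshape(B * T, 8, 8)).reshape(B, T, -1)
    c, _ = context(z[:, :-1])         # summary of the patches seen so far
    scores = predict(c[:, -1]) @ z[:, -1].t()   # (B, B); diagonal = true patch
    return F.cross_entropy(scores, torch.arange(B))

loss = cpc_loss(torch.rand(16, 10, 8, 8))   # e.g. 16 images, 10 patches each

# phase 2: rip out the context net, freeze the encoder, and train a small
# classifier on pooled features using the few labeled examples
classifier = nn.Linear(32, 10)
def classify(patches):
    with torch.no_grad():
        B, T = patches.shape[:2]
        z = encoder(patches.reshape(B * T, 8, 8)).reshape(B, T, -1)
    return classifier(z.mean(dim=1))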
Thumbnail Those grain bins everywhere.
Thumbnail Boston Dynamics' "Spot" robot, a remote-controlled robot dog with a robotic arm on its head; the Veo Robotics FreeMove, a "collaborative robot" safe to operate in factories without being caged off from humans; the MIT/BMW algorithm that enables robots to navigate around humans more efficiently in factories (which I posted about yesterday); and the Houston brain-to-computer interface that is able to "predict reward outcome" and figure out what its user wants to do.
Thumbnail "The five-year project probed when and where a critical family of transcription factor proteins (E2F family) is expressed in mammalian cells. Mammals have at least nine different E2F transcription factors that have either activation (on) or repressive (off) functions. All units within cells must work properly to make a functioning organ. 'Our DNA provides the code to make the multiple proteins, which are the functional units of our cells. Transcription is the first biological process that makes proteins from DNA, and transcription factors are the on and off switches for this process.'"

"Instead of studying cell division regulation in cultured cells or in vitro, the researchers used a whole-organism approach. Two major discoveries were made in this study. The most surprising discovery during this work was that the same E2F family of proteins is organized into two modules that work similarly in all cell types and organs in our bodies. "So it appears that a universal mechanism has evolved to control cell divisions, regardless of the diversity of cell types existing in our bodies."

"The second discovery was the development of tools that allow this level of precision in the analysis of proteins in complex tissues."

"The Leone lab harnessed the power of artificial intelligence to quantify transcription factors across numerous cells in mouse tissues. While deep learning-based tools have been used for medical imaging before, it was not advanced enough to recognize individual cells in microscopic images within tissues/organs."
Thumbnail "This tool, which is built around an algorithm called HeadXNet, improved clinicians' ability to correctly identify aneurysms at a level equivalent to finding six more aneurysms in 100 scans that contain aneurysms. It also improved consensus among the interpreting clinicians. While the success of HeadXNet in these experiments is promising, the team of researchers -- who have expertise in machine learning, radiology and neurosurgery -- cautions that further investigation is needed to evaluate generalizability of the AI tool prior to real-time clinical deployment given differences in scanner hardware and imaging protocols across different hospital centers."

To train the algorithm, the researchers labeled aneurysms on 611 computed tomography (CT) angiogram head scans. "We labelled, by hand, every voxel -- the 3D equivalent to a pixel -- with whether or not it was part of an aneurysm. Building the training data was a pretty grueling task."
Thumbnail "The tool can deduce the origins of any microbiome."

"The new computational tool, called 'FEAST,' can analyze large amounts of genetic information in just a few hours, compared to tools that take days or weeks."

"The source-tracking program gives the percentage of the microbiome that came from somewhere else. It's similar in concept to a census that reveals the countries that its immigrant population came from, and what percentage each group is of the total population."

"For example, using the source-tracking tool on a kitchen counter sample can indicate how much of that sample came from humans, how much came from food, and specifically which types of food."
Thumbnail Method to eliminate gene editing attempts that stray off target. "The researchers optimized the way enzymes interacted with RNA, and they engineered gene editing tools that caused zero off-target effects."

They did this by "engineering deaminases". Deaminases are enzymes that catalyze the removal of an amino group from a molecule.
Thumbnail New technology "harnesses bacterial jumping genes to reliably insert any DNA sequence into the genome without cutting DNA. Current gene-editing tools rely on cutting DNA, but those cuts can lead to errors.

"Sam Sternberg, PhD, assistant professor of biochemistry & molecular biophysics at Columbia and senior author of the new study, and three graduate students looked to bacteria to find variations of well-studied CRISPR-Cas systems with unusual properties that would reveal new tool capabilities. This search led them to a transposon, or 'jumping gene,' found in the bacterium Vibrio cholerae."

"They found that the transposon integrates into specific sites in the bacterial genome not by cutting DNA into two, but by using a separate enzyme to slip the transposon into the genome. Importantly, the site where the enzyme, an integrase, inserts the DNA is completely controlled by its associated CRISPR system."
Thumbnail Bayesian model (no neural net) that predicts human movements in factories so robots can dodge them. The model doesn't just predict movement in a straight line but predicts complex movement, such as criss-crossing a hallway, and does it better than previous models.
Thumbnail "More than 15 years after scientists first mapped the human genome, most diseases still cannot be predicted based on one's genes, leading researchers to explore epigenetic causes of disease. But the study of epigenetics cannot be approached the same way as genetics, so progress has been slow. Now, researchers at the USDA/ARS Children's Nutrition Research Center at Baylor College of Medicine and Texas Children's Hospital have determined a unique fraction of the genome that scientists should focus on. Their report, which provides a 'treasure map' to accelerate research in epigenetics and human disease, was published today in Genome Biology."

"To identify genomic regions in which DNA methylation differs between people but is consistent across different tissues, they profiled DNA methylation throughout the genome in three tissues (thyroid, heart and brain) from each of 10 cadavers."

"Since these tissues each represent a different layer of the early embryo, we're essentially going back in time to events that occurred during early embryonic development."
Thumbnail The cells of the first multicellular organisms probably didn't resemble modern-day sponge cells; they probably resembled stem cells.
Thumbnail Christopher Barnatt of "ExplainingComputers" gives the Nvidia Jetson Nano a whirl. The Nvidia Jetson Nano is a small, low-power, single-board computer just for running neural networks.
Thumbnail Apple buying Drive.ai, an autonomous vehicle startup, according to rumor. "It's unclear how much Apple is paying. Drive.ai has raised about $77 million in funding since it was founded in 2015, and was valued at about $200 million in 2017, according to Pitchbook data."

"The deal is a so-called acqui-hire, where larger technology companies buy small startups to gain talent. Apple is planning to pick which Drive.ai staff it wants to keep, and the tech giant won't be using any intellectual property from the startup, the people said."
Thumbnail AI that sounds like Bill Gates.
Thumbnail "Recently I attended the west coast premiere of General Magic. It's a documentary movie about the rise and fall of a startup building one of the first PDAs in the early 1990's. Similar to Apple's Newton and predating the PalmPilot, General Magic incorporated all of the key ideas of today's modern smartphone into a single product long before the technologies (for both the device and the networks) were ready to support a great user experience."

"Unfortunately, General Magic's product failed - the price was way too high, wireless networks were in their relative infancy, and the device was slow. It took about 13 years for the underlying technologies to mature enough for Apple to release their first iPhone. And (not coincidentally) many of the same engineers at General Magic played key roles in bringing the iPhone and Android smartphones to market."

"I think consumer robotics is roughly 10-15 years away from a major iPhone-like moment."
Thumbnail "The autonomous-driving startup Cruise Automation, which was acquired by General Motors in 2016, is facing technological issues as it seeks to launch an autonomous ride-hailing service by the end of this year, The Information's Amir Efrati reported." "Among the issues reportedly experienced by Cruise vehicles are near-accidents, getting stuck in the middle of a trip, taking 80% longer to complete a trip than a human driver would, and erratic braking and steering." "The vehicles are expected to be only around 5%-10% as safe as human-driven vehicles by the end of this year."

"In the middle of April, Honda Motor CEO Takahiro Hachigo hopped into a self-driving car prototype made by General Motors' Cruise Automation for a demonstration ride. It didn't go well. About 20 minutes in, the car's software suddenly turned itself off even as the car kept moving. A man sitting behind the wheel -- the backup driver -- had to take control. Attempts to restart the system failed, and a second Cruise vehicle had to pick up Mr. Hachigo to finish the demonstration."
Thumbnail "More than three years ago, self-driving trucks startup Starsky Robotics was founded to solve a fundamental issue with freight -- a solution that CEO Stefan Seltz-Axmacher believes hinges on getting the human driver out from behind the wheel. But a funny thing happened along the way. Starsky Robotics started a regular ol' trucking company."

"Starsky's trucking business, which has been operating in secret for nearly two years alongside the company's more public pursuit of developing autonomous vehicle technology, has hauled 2,200 loads for customers. The company has 36 regular trucks that only use human drivers to haul freight. It has three autonomous trucks that are driven and supported by a handful of test drivers."
Thumbnail DeepMind, in collaboration with Google Brain, has now open sourced its Hanabi Learning Environment (HLE).

"Named after the Japanese word for 'fireworks', Hanabi is a cooperative game for two to five players, who must play their cards in a specific order to trigger a simulated pyrotechnics display. Each player's cards are visible to all other players but not themselves. In turn, players choose to either give information, discard a card, or play a card. Most players follow basic conventions and some have developed advanced strategies such as priority prompts, priority finesses, and bluffs. The game is challenging for AI agents as it is based on imperfect information, limited communication and reasoning, and successful leveraging of theory of mind."
Thumbnail Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung have announced they are shipping 5 nanometer chips in "risk production" -- initial customers are taking a chance it will work for their designs. 5 nanometers is special because it is the first time chip manufacturers are using extreme ultraviolet (EUV) lithography. "With a wavelength of just 13.5 nm, EUV light can produce extremely fine patterns on silicon. Some of these patterns could be made with the previous generation of lithographic tools, but those tools would have to lay down three or four different patterns in succession to produce the same result that EUV manages in a single step."

Only Samsung and TSMC are offering 5-nm foundry services. GlobalFoundries gave up at 14 nm and Intel, which is years late with its rollout of an equivalent to competitors' 7 nm, is thought to be pulling back on its foundry services.
Thumbnail Robot riding hovershoes.
Thumbnail "Meshing" with augmented reality.
Thumbnail Soccer (aka football) with Sphero minis. They play soccer/football with a plastic attachment they call a "chariot" that pushes the ball around.
Thumbnail Video animation of Ikea's "Rognan" robotic furniture for tiny apartments in Hong Kong.
Thumbnail Google Research Football is a simulated football environment, basically a video game with a physics-based engine. This was developed by a research team in Zürich, so 'football' means "soccer" if you're in the USA. (Never mind that "football" is actually the more logical name.) The two teams are called the "Real Bayesians" and "Frequentists United". Though whether they actually use Bayesian or frequentist probability, or no probability-based algorithms at all, depends on what reinforcement learning code you write and plug into to it, which can be anything you want.

Run and score, counterattack, pass and shoot, corner kick, see you on the pitch!
Thumbnail More on Google Research Football, the simulated football environment described above.

"As a reference, we provide benchmark results for two state-of-the-art reinforcement learning algorithms: DQN and IMPALA, which both can be run in multiple processes on a single machine or concurrently on many machines. We investigate both the setting where the only rewards provided to the algorithm are the goals scored and the setting where we provide additional rewards for moving the ball closer to the goal."
Thumbnail A fake subreddit where someone used OpenAI's GPT-2 to generate the entire thing -- posts and comments alike.
Thumbnail Exposing mice to strong magnetic fields for 2 hours caused changes in bilirubin, white blood cell, platelet, and lymphocyte levels, but they still stayed within normal reference ranges, and the magnetic fields didn't appear to have any long-term harmful effects.
Thumbnail What's more important, keeping the sky clear of satellites for astronomy, or providing internet to the poorest 3 billion people? Fraser Cain weighs in on SpaceX's Starlink launch.
Thumbnail Unbeknownst to me until now, in March, the KickSat-2 project launched 105 "cracker-sized" (look at the photo) satellites into space, which communicated with each other and the ground.

"This isn't the start of a semi-permanent thousands-strong constellation, though -- the satellites all burned up a few days later, as planned."
Thumbnail China did its first sea-based satellite launch.
Thumbnail "Rust: A Language for the Next 40 Years." This is a non-technical talk about the philosophy behind the Rust language, using analogies to the real world such as the railroad industry.
Thumbnail OpenAI's full-sized (1.5 billion parameter) GPT-2 model, which OpenAI didn't release due to concerns people could use it to create "fake news," has been replicated by a computer science student in Bavaria, Germany. He plans to release the model into the wild on July 1st. He justified this by writing an essay on "trust" and "the curious hacker."
Thumbnail Twitter acquired Fabula.ai. Fabula AI is "a London-based start-up, with a world-class team of machine learning researchers who employ graph deep learning to detect network manipulation. Graph deep learning is a novel method for applying powerful ML techniques to network-structured data. The result is the ability to analyze very large and complex datasets describing relations and interactions, and to extract signals in ways that traditional ML techniques are not capable of doing."

"This strategic investment in graph deep learning research, technology and talent will be a key driver as we work to help people feel safe on Twitter and help them see relevant information. Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience."
Thumbnail A "record-shattering underwater sound with an intensity that eclipses that of a rocket launch" has been produced. "The intensity was equivalent to directing the electrical power of an entire city onto a single square meter, resulting in sound pressures above 270 decibels."

"When the X-ray laser hit the jet, it vaporized the water around it and produced a shockwave. As this shockwave traveled through the jet, it created copies of itself, which formed a 'shockwave train' that alternated between high and low pressures. Once the intensity of underwater sound crosses a certain threshold, the water breaks apart into small vapor-filled bubbles that immediately collapse. The pressure created by the shockwaves was just below this breaking point, suggesting it was at the limit of how loud sound can get underwater."

I wanted to get the paper to learn more about this, but it was paywalled, so the abstract is all I can get. The concept of there being a "maximum loudness", at least underwater, is intriguing, but the abstract doesn't spell out a hard limit. The abstract says, "We investigated the generation and propagation of ultrasonic pressure waves produced by focused x-ray free-electron laser pulses in 14 to 30 μm diameter liquid water microjets. The pressure waves formed through reflections, at the surface of the microjets, of the initial shock launched in the liquid by the x-ray pulse. These waves developed a characteristic geometric pattern which is related to, but different from, the shock structures of supersonic gas jets. Fully developed waves had initial peak pressures ranging from less than –24 MPa to approximately 100 MPa, which exceed the compressive and tensile strengths of many materials, and correspond to extreme sound intensities on the order of 1 GW/m^2 and sound pressure levels above 270 dB (re: 1 μPa). The amplitudes and intensities were limited by the wave destroying its own propagation medium through cavitation, and therefore these ultrasonic waves in jets are one of the most intense propagating sounds that can be generated in liquid water. The pressure of the initial shock decayed exponentially, more rapidly in thinner jets, and the decay length was proportional to the jet diameter within the accuracy of measurements. Extrapolating our results to thinner jets, we find that the pressure waves may damage protein crystals carried by liquid jets in x-ray laser crystallography experiments conducted at megahertz repetition rates."

Pa is pascals, the metric unit of pressure. 1 Pa = 0.000145038 psi (pounds per square inch, the non-metric unit of pressure you might be familiar with), or, to put another way, 1 MPa (megapascal) = about 145 psi.
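
And for the 270 dB figure: underwater sound levels are referenced to 1 μPa, so you can convert back to pressure directly:

# SPL (dB re 1 uPa) to pressure: p = p_ref * 10^(SPL/20)
p_ref = 1e-6                         # 1 micropascal, in Pa
p = p_ref * 10 ** (270 / 20)         # pressure amplitude in Pa
print(f"{p / 1e6:.1f} MPa")          # ~31.6 MPa
print(f"{p * 0.000145038:.0f} psi")  # ~4583 psi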
Thumbnail Docking system for autonomous boats and autonomous underwater vehicles (AUVs). "The researchers describe roboat units that can now identify and connect to docking stations. Control algorithms guide the roboats to the target, where they automatically connect to a customized latching mechanism with millimeter precision. Moreover, the roboat notices if it has missed the connection, backs up, and tries again."

"The researchers tested the latching technique in a swimming pool at MIT and in the Charles River, where waters are rougher. In both instances, the roboat units were usually able to successfully connect in about 10 seconds, starting from around 1 meter away, or they succeeded after a few failed attempts."

The article doesn't say how the system works, but it works by combining acoustic, electromagnetic, and visual sensors that operate at different ranges, plus a latching mechanism based on ball-and-socket joints that allows rotation on 3 axes. The visual system uses AprilTags, tags that look kind of like QR codes, which give the autonomous boats/AUVs accurate reference points they can use to position themselves for docking.
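
For the visual part, AprilTag detection is available as an off-the-shelf library. A minimal sketch, assuming the "apriltag" Python bindings (which may well not be what the researchers used):

import cv2
import apriltag

detector = apriltag.Detector()
frame = cv2.imread('dock_camera.png', cv2.IMREAD_GRAYSCALE)
for detection in detector.detect(frame):
    # each tag's id, center, and corner pixels give a known reference point
    # the boat can servo against while lining up the latch
    print(detection.tag_id, detection.center, detection.corners)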
Thumbnail "Ikea is launching a new robotic furniture system called Rognan, developed in collaboration with American furniture startup Ori Living. The large storage unit, controlled by a touchpad, can slide across a room to divide a small room into two living spaces, and contains a bed, desk, and a couch for people to pull out when needed. It's designed for people living in urban areas to maximize their small spaces, and will launch first in Hong Kong and Japan in 2020."
Thumbnail "After training to hand-write Japanese characters, the robot then turned around and started to copy words in a slew of other languages it'd never written before, including Hindi, Greek, and English, just by looking at examples of that handwriting. Not only that, it could do English in print and cursive. Oh, and then it copied a drawing of the Mona Lisa on its own for good measure."

"Their learning system is split into two distinct models. A 'local' model is in charge of what's going on with the current stroke of the pen -- so aiming in the right direction and determining how to end the stroke. And a 'global' model is in charge of moving the robot's writing utensil to the next stroke of the character."
Thumbnail New video of Amazon's warehouse robots, on Amazon's "Amazon News" channel, with "Amazon's newest robots mean new jobs" as the title, and an interview with one person who works with the robots.
Thumbnail Neural net tries to name cats. The latest from (who else?) Janelle Shane.

Honeystring
Dr Leg
Tom Noodle
Pinball Scene
Peanutbutterjiggles
You're Telling A Lie
Beep Boop
Thoughts
Bobble Bun
Atmosphere
You Name It
Whiskeridoo
Sparky Buttons

If I ever have a cat maybe I'll name it "Thoughts." Thoughts?
Thumbnail "Here's a prognosis: As soon as GANs have become proper Photoshop filters, a thing we can expect from looking at the work of David Bau and others, the mimetic problem will disappear. At least it will stop being the focus of aesthetic explorations of artificial intelligence, much like art accessing the Internet is not presented as NetArt anymore."

"These are exciting times, for science and art alike. We are not, however, in the middle of an artistic revolution, and even less so are artists in danger of being replaced by machines any time soon."
Thumbnail Algorithms that are supposed to relieve traffic congestion when a small percentage of cars are autonomous vehicles (as small as 10% in their simulations, though they say other research shows there is an effect at only 3-4%). Specifically what they did here was design a set of benchmarks that people looking to do this can incorporate into reinforcement learning algorithms. The actual learning work still has to be done. They say the learning algorithms should adapt to different benchmarks depending on different traffic scenarios.
Thumbnail Bill Gates and Warren Buffett pick up a shift at Dairy Queen.
Thumbnail Winners of autonomous seafloor-mapping competition announced. In the main competition, each team had 24 hours to map 250 square kilometers of seafloor at 5 m resolution, with the maps compared against an existing high-quality map. For the bonus prize, the competition was to detect a chemical signal and trace it back to the device responsible.

The winning team was from the University of New Hampshire. For the bonus prize, the prize was split between a team of junior high and high school students from San Jose, CA and a team from Tampa, FL.
Thumbnail "A DNA material with capabilities of metabolism, in addition to self-assembly and organization -- three key traits of life" has been constructed.

"The Cornell engineers created a biomaterial that can autonomously emerge from its nanoscale building blocks and arrange itself -- first into polymers and eventually mesoscale shapes. Starting from a 55-nucleotide base seed sequence, the DNA molecules were multiplied hundreds of thousands times, creating chains of repeating DNA a few millimeters in size. The reaction solution was then injected in a microfluidic device that provided a liquid flow of energy and the necessary building blocks for biosynthesis."

"As the flow washed over the material, the DNA synthesized its own new strands, with the front end of the material growing and the tail end degrading in optimized balance. In this way, it made its own locomotion, creeping forward, against the flow, in a way similar to how slime molds move."

"The designs are still primitive, but they showed a new route to create dynamic machines from biomolecules. We are at a first step of building lifelike robots by artificial metabolism."
Thumbnail "Swedish job candidates to be grilled by robotic interviewer." "The purpose of the robots, which were developed by Furhat Robotics at the KTH Royal Institute of Technology in partnership with recruitment firm TNG, is to ensure applicants face the same interview procedure without the interviewer relying on gut feeling."

"It is becoming very popular for organisations to be able to say they have a discrimination-free recruitment process. We want to take this idea as far as possible."
Thumbnail Robot at a hotel in Singapore makes an omelette.
Thumbnail Robot skyscraper window washers.
Thumbnail Robot painter given humanoid form and a name.
Thumbnail "Grover is a state-of-the-art detector for Neural Fake News."

"Counterintuitively, Grover is also a state-of-the-art generator for Neural Fake News. Fill out some article pieces below and press generate next to the piece you want to generate. You can also fill in an article to detect if it was Grover-written or Human-written."

To give it a whirl, I clicked "Generate" without changing any of the default parameters, and it said:

New Study Provides Evidence that Vaccines Cause Autism

The U.S. is currently in the midst of a new health crisis, and this time, the symptoms of this health crisis are for parents to worry about their own children. New research shows that vaccines may be linked to autism, which can only mean one thing -- more anti-vaccine propaganda from the likes of celebrity-turned-anti-vaxxer Jenny McCarthy.

Vaccines have been on the research radar for years now, and the latest study to come out of Cornell University looks at the potential links between vaccinations and autism spectrum disorder. According to a New York Times report, the study has already received some backlash from some researchers, who believe that not enough study was conducted to fully understand the exact link between vaccines and autism.

In the new study, the researchers used data from the National Immunization Survey -- a study conducted by the Centers for Disease Control and Prevention that has been conducted for the past 16 years. They published their findings in the journal, BMC Public Health, and included only children of parents who claimed to be aware of the condition.

Jenny McCarthy, an anti-vaccine activist and former special education teacher, has been a vocal supporter of questionable anti-vaccine claims, also garnering public attention for a recent controversial television interview in which she claimed that she has a son with autism and that he was treated for vaccinations, according to Fox News.

Parents Watch Jenny McCarthy In Doc's Quest To Vax Your Child: Their Lifetime Of Vitiligo Shows The Autism Effects Of Vaxxed https://t.co/oG3FTcg2Wt -- ☕netw3rk (@netw3rk) May 11, 2018

The researchers examined the prevalence of autism rates, as well as the quantity and types of vaccinations. The vaccines covered in the study were "Flu, RSV, pneumonia and Haemophilus influenzae type b, diphtheria, tetanus, pertussis, hepatitis A, and H1N1," and were given to children after age 9 months. The researchers were also curious about the parents of children with autism, as the common theme for the parents with autism seems to be a "mythic association" with a "magic vaccine."

"Our results highlight the question of causality for vaccines-autism among these families," said lead author and Cornell College of Veterinary Medicine researcher Ragnhild Minnesot, according to the Times.

Minnesot and her team ultimately found a very "tight correlation" between the frequency of autism symptoms and the vaccines given to the children. The researchers concluded that the current evidence leads them to believe that vaccines may be the cause of autism in some children. However, in order to really draw direct conclusions from the research findings, the researchers warn that more research will be needed, as there simply isn't enough information available at this time to really make that connection. They also note that they don't believe the research provides information on the MMR vaccine alone, as there is "no lasting association between MMR vaccination and autism spectrum disorder."

Despite the critics who believe that the research doesn't capture the actual causes of autism, the mother of a child with autism who lives just a few miles from Minnesot's study area, Hillary Poje, is all in. She was also a nurse in the U.S. Army Reserves, and she encourages others with loved ones who are autistic to vaccinate them.

"This research is not convincing me in the slightest," she told the Times. "I didn't vaccinate my son when he was 8 months old because of the fear that it would trigger another autism onset. I followed the doctor's advice and waited a year. Unfortunately, the only result of that was that my son ended up having Prader-Willi syndrome."

Dwight Dickens, the lead researcher of the study, made it clear that more research will be needed before he can really make a firm connection between vaccines and autism, and that it isn't likely to be made with this particular study.

"I just don't know that we can make any connection," he said.

Nonetheless, there is no shortage of parents in today's world who continue to believe the anti-vaccine claims put forth by famous celebrities, a tragedy that brings attention to the symptoms of the health crisis as the researchers continue to work to present accurate information.

"Given the harm and expense associated with autism, I cannot allow this medical crisis to be used as a means of avoiding needed vaccinations," said Poje.

"The fundamental fact is that vaccines work," Dickens added.
Thumbnail "Neural lander" system employs a deep neural network to overcome the challenge of ground-effect turbulence when landing a drone.
Thumbnail "Neural lander" system employs a deep neural network to overcome the challenge of ground-effect turbulence when landing a drone. "Complex turbulence is created by the airflow from each rotor bouncing off the ground as the ground grows ever closer during a descent. This turbulence is not well understood nor is it easy to compensate for, particularly for autonomous drones. That is why takeoff and landing are often the two trickiest parts of a drone flight. Drones typically wobble and inch slowly toward a landing until power is finally cut, and they drop the remaining distance to the ground."

"To make sure that the drone flies smoothly under the guidance of the deep neural network, the team employed a technique known as spectral normalization, which smooths out the neural net's outputs so that it doesn't make wildly varying predictions as inputs or conditions shift. Improvements in landing were measured by examining deviation from an idealized trajectory in 3D space. Three types of tests were conducted: a straight vertical landing; a descending arc landing; and flight in which the drone skims across a broken surface -- such as over the edge of a table -- where the effect of turbulence from the ground would vary sharply."

"The new system achieves actual landing rather than getting stuck about 10 to 15 centimeters above the ground, as unmodified conventional flight controllers often do. Further, during the skimming test, the Neural Lander produced a much a smoother transition as the drone transitioned from skimming across the table to flying in the free space beyond the edge."
Thumbnail "Coronary artery disease is the most common type of heart disease, killing more than 370,000 people in the United States annually. SPECT MPI, which is widely used for its diagnosis, shows how well the heart muscle is pumping and examines blood flow through the heart during exercise and at rest. On new cameras with a patient imaged in sitting position, two positions (semi-upright and supine) are routinely used to mitigate attenuation artifacts. The current quantitative standard for analyzing MPI data is to calculate the combined total perfusion deficit (TPD) from these 2 positions. Visually, physicians need to reconcile information available from 2 views."

"Deep learning (DL) analysis of data from the two-position stress MPI was compared with the standard TPD analysis of 1,160 patients without known coronary artery disease. Patients underwent stress MPI with the nuclear medicine radiotracer technetium (99mTc) sestamibi. New-generation solid-state SPECT scanners in four different centers were used, and images were quantified at the Cedars-Sinai Medical Center in Los Angeles, California. All patients had on-site clinical reads and invasive coronary angiography correlations within six months of MPI.

"The study revealed that 718 (62 percent) patients and 1,272 of 3,480 (37 percent) arteries had obstructive disease. Per-patient sensitivity improved from 61.8 percent with TPD to 65.6 percent with DL, and per-vessel sensitivity improved from 54.6 percent with TPD to 59.1 percent with DL. In addition, DL had a sensitivity of 84.8 percent, versus 82.6 percent for an on-site clinical read."
Thumbnail "Based on analyses of 130,000 written Danish assignments, scientists can, with nearly 90 percent accuracy, detect whether a student has written an assignment on their own or had it composed by a ghostwriter."

"Danish high schools currently use the Lectio platform to check if a student has handed in plagiarized work that has passages copied directly from a previously submitted assignment."

"The problem today is that if someone is hired to write an assignment, Lectio won't spot it. Our program identifies discrepancies in writing styles by comparing recently submitted writing against a student's previously submitted work. Among other variables, the program looks at: word length, sentence structure and how words are used. For instance, whether 'for example' is written as 'ex.' or 'e.g.'." "The program, Ghostwriter, is built around machine learning and neural networks -- branches of artificial intelligence that are particularly useful for recognizing patterns in images and texts."
Thumbnail PyTorch is now supported by the deep learning compiler TVM. "Usage is simple:

import torch_tvm
torch_tvm.enable()

"That's it! PyTorch will then attempt to convert all operators it can to known Relay operators during its JIT compilation process."
Thumbnail The current "deep learning" approach to autonomous vehicles won't work because an AI approach capable of "common sense" is necessary, says Melanie Mitchell, computer science professor and author of Complexity: A Guided Tour (2009) and Artificial Intelligence: A Guide for Thinking Humans (forthcoming in 2019). "The challenges for autonomous vehicles probably won't be solved by giving cars more training data or explicit rules for what to do in unusual situations. To be trustworthy, these cars need common sense: broad knowledge about the world and an ability to adapt that knowledge in novel circumstances. While today's AI systems have made impressive strides in domains ranging from image recognition to language processing, their lack of a robust foundation of common sense makes them susceptible to unpredictable and unhumanlike errors."
Thumbnail Prediction from Tom Warren at The Verge that Huawei's Android and Windows OS replacements will fail. Because just about everyone else who's tried has failed. "Mozilla tried with its Firefox OS for years before giving up in 2015, Canonical pushed Ubuntu phones that never went anywhere, and Microsoft famously tried to create a third mobile operating system with Windows Phone. Even Samsung, once a big threat to Google's version of Android, has all but given up on its Tizen operating system for phones, using it to power the company's smartwatches and TVs instead. And let's not even talk about what happened to BlackBerry."

He goes on to point out that even if you could make a credible Android compatible phone, you won't have Chrome, Gmail, YouTube, Google Maps, Google Docs, etc.

That's his opinion, so here's my response opinion. What he doesn't seem to consider, and what might make this time different, is that every phone maker everywhere in the world is now freaking out over the fact that a technologically clueless person (no, knowing how to tweet doesn't count) in a position of power can instantly destroy multi-billion-dollar businesses with a dumb decision. They're not going public about it, but you can be sure every phone maker in the world is watching Huawei in astonishment and trying to figure out what their Plan B is if the same thing happens to them. And they're looking at their dependence on US hardware as well as software -- Qualcomm chipsets and ARM processors and so on. If the cell phone companies outside the US can get the Chinese population, which is 1.4 billion, to support non-US alternatives, then non-US alternatives to operating systems, CPUs, mobile phone chipsets, and other technologies could become viable. And Tom Warren's predictions could turn out to be wrong. This time could be different. Just my opinion for what it's worth.
Thumbnail "Some recent results (Gradient visualization, DeepLIFT, InfoGAN) show it is possible to gain some insight into neural networks by looking at layer activations. But at AAAI we saw Ghorbani, et al. show that these techniques are fragile and adversarial techniques can be used to generate inputs that arbitrarily move layer activations while still giving the correct classification result. We also saw several papers, such as this one on climate, discussing the benefits of classical AI over deep networks with respect to interpretability."

"Social AI is concerned with the construction of robots and conversational agents that exhibit social characteristics (e.g. small talk, facial expressions, give-and-take conversation, ...)."

"Fairness in machine learning attempts to resolve the growing concern about automated decision models with respect to protected classes like race, gender, or other axes and the resulting policies that come from these automated models."

"As AI techniques are applied to increasingly complex real-world environments, there is a need for more sophisticated, high fidelity simulations for training purposes. For example, in order for self-driving cars to reach the point where there is a high degree of trust in their safety, many hours behind the wheel are needed. The more of this that can be done in simulation, the more cost-effective and rapid the solutions will arrive. However, the many ways in which the real world can differ from simulation can undermine the simulation's effectiveness."
Thumbnail Google made a "People + AI" guidebook for "designing human-centered AI products". "It was written for user experience (UX) professionals and product managers as a way to help create a human-centered approach to AI on their product teams."

"There are six chapters, each with exercises, worksheets, and resources to help turn guidance into action: User Needs + Defining Success, Data Collection + Evaluation, Mental Models, Explainability + Trust, Feedback + Control, and Errors + Graceful Failure."

"Even the best AI will fail if it doesn’t provide unique value to users." "Sourcing and evaluating the data used to train AI involve important considerations." "AI-powered systems can adapt over time. Prepare users for change -- and help them understand how to train the system." "Explaining predictions, recommendations, and other AI output to users is critical for building trust." "When users give feedback to AI products, it can greatly improve the AI performance and the user experience over time." "When AI makes an error or fails, things can get complicated."
Thumbnail "We are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot's motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator's innate motor skills and split-second reflexes to keep its footing."

"Future disaster robots will ideally have a great deal of autonomy. Someday, we hope to be able to send a robot into a burning building to search for victims all on its own, or deploy a robot at a damaged industrial facility and have it locate which valve it needs to shut off. We're nowhere near that level of capability. Hence the growing interest in teleoperation."
Thumbnail Bill Gates's "breakthrough technologies" -- technologies he thinks will not only go somewhere but also do the most good. Lab-grown meat, AI virtual assistants, the reinvented toilet, nuclear power, innovation in China.
Thumbnail AI plays capture the flag in Quake III Arena. "To train the AI to work as a team, the scientists created 30 different bots and pitted them against each other in a series of matches on randomly generated maps." "The only data the bots had to learn from was the first-person visual perspective of their character and game points, awarded for things like picking up flags or tagging opponents."

"After 450,000 games, the researchers arrived at the best bot, which they named For The Win (FTW). They then tested it in various matches with a mirror FTW, an FTW bot missing a crucial learning element, the game's in-built bots, and humans. Teams of FTW bots consistently outperformed all other groups, though humans paired with FTW bots were able to beat them 5% of the time."

"The FTW bots learned to play seamlessly with humans and machines, and they even developed classic cooperative strategies." "Those strategies included following teammates in order to outnumber opponents in later firefights and loitering near the enemy base when their teammate has the flag to immediately grab it when it reappears. In one test, the bots invented a completely novel strategy, exploiting a bug that let teammates give each other a speed boost by shooting them in the back."
Thumbnail Last week SpaceX started launching Starlink satellites. The plan is 12,000 satellites orbiting at 550 km providing broadband internet to the entire planet. This launch is 60 satellites.
Thumbnail Secure and Private AI course on Udacity taught by Andrew Trask (OpenMined/DeepMind). "Learn how to extend PyTorch with the tools necessary to train AI models that preserve user privacy."

Part of Udacity's Deep Learning nanodegree program.
Thumbnail Video of the OpenAI Five vs OG (human team) Dota 2 match. OpenAI won 2-0.

I'm not a Dota player so I couldn't really follow the game but it seemed like a pretty exciting game. At least the first game. The AI actually did better in the second game so it wasn't as close.
Thumbnail Strawberry picking robot. Key to the system is a vision system that understands the plant structure, detects the ripe strawberries, determines each berry's position with millimeter accuracy, and enables the gripper to exert force that breaks the stem without bruising the berry.
Thumbnail "DeepFake" where Bill Hader's face is morphed into Arnold Schwarzenegger's face (with Bill Hader's haircut) whenever he does an Arnold Schwarzenegger impersonation. The transition is remarkably smooth.
Thumbnail "Many mutations in DNA that contribute to disease are not in actual genes but instead lie in the 99% of the genome once considered 'junk.'" "Using artificial intelligence, a Princeton University-led team has decoded the functional impact of such mutations in people with autism."

"The researchers analyzed the genomes of 1,790 families in which one child has autism spectrum disorder but other members do not. The method sorted among 120,000 mutations to find those that affect the behavior of genes in people with autism."

"Most previous research on the genetic basis of disease has focused on the 20,000 known genes and the surrounding sections of DNA that regulate those genes. However, even this enormous amount of genetic information makes up only slightly more than 1% of the 3.2 billion chemical pairs in the human genome. The other 99% has conventionally been thought of as 'dark' or 'junk,' although recent research has begun to disrupt that idea."

"In their new finding, the research team offers a method to make sense of this vast array of genomic data. The system uses an artificial intelligence technique called deep learning in which an algorithm performs successive layers of analysis to learn about patterns that would otherwise be impossible to discern. In this case, the algorithm teaches itself how to identify biologically relevant sections of DNA and predicts whether those snippets play a role in any of more than 2,000 protein interactions that are known to affect the regulation of genes. The system also predicts whether disrupting a single pair of DNA units would have a substantial effect on those protein interactions."

"The algorithm 'slides along the genome' analyzing every single chemical pair in the context of the 1,000 chemical pairs around it, until it has scanned all mutations. The system can thus predict the effect of mutating each and every chemical unit in the entire genome. In the end, it reveals a prioritized list of DNA sequences that are likely to regulate genes and mutations that are likely to interfere with that regulation."

"Prior to this computational achievement, the conventional way to glean such information would be painstaking laboratory experiments on each sequence and each possible mutation in that sequence. This number of possible functions and mutations is too big to contemplate -- an experimental approach would require testing each mutation against more than 2,000 types of protein interactions and repeating those experiments over and over across tissues and cell types, amounting to hundreds of millions of experiments. Other research groups have sought to accelerate this discovery by applying machine learning to targeted sections of DNA, but had not achieved the ability to look at each DNA unit and each possible mutation and the effects on each of more than 2,000 regulatory interactions across the whole genome."
Thumbnail "In New York and other states across the country, authorities are acquiring technology to extract and digitize the voices of incarcerated people into unique biometric signatures, known as voice prints. Prison authorities have quietly enrolled hundreds of thousands of incarcerated people's voice prints into large-scale biometric databases. Computer algorithms then draw on these databases to identify the voices taking part in a call and to search for other calls in which the voices of interest are detected. Some programs, like New York's, even analyze the voices of call recipients outside prisons to track which outsiders speak to multiple prisoners regularly."
Thumbnail Voice translation from Google that keeps your voice in the translated language. Listen to the sound samples. The system works by using a single end-to-end translation system instead of dividing the task into separate stages, which makes it more straightforward to retain the voice of the original speaker after translation. This approach is also faster, avoids compounding errors between speech recognition and translation, and better handles words that don't need to be translated (e.g., names and proper nouns).

It's a sequence-to-sequence neural network which takes spectrograms as input and generates spectrograms in the target language as output.
Thumbnail "Reprogramming War," a report on the state of the military AI race from PAX, a partnership between IKV (Interchurch Peace Council) and Pax Christi (Netherlands).

The report details the state of AI in the United States, China, Russia, the UK, France, Israel, and South Korea. Concludes that we are seeing the start of an AI arms race. All these countries are developing military AI, integrating tech companies and research universities with the military, increasing their military AI research spending, including AI in their military strategy plans, and increasing political rhetoric regarding concerns of falling behind adversaries. China has the most institutionalized military-private sector integration. Russia has the greatest military-university cooperation. The US government is the only government that has published an official policy on lethal autonomous weapons use (the 3000.09 Directive). Israel, the US, and Russia, claim potential humanitarian benefits to lethal autonomous weapons. The US argues that advances in autonomy may enhance the implementation of the law of war. South Korea is seeking defensive autonomous weapons. France has proposed regulating lethal autonomous weapons. France is the sole state to consider ethics within its national strategy. In the US, resistance from Silicon Valley has resulted in the creation of a Defense Innovation Board to come up with ethical principles for military AI.
Thumbnail Amazon is hiring German AI researchers and allegedly turning Tübingen into Germany's "Cyber Valley."
Thumbnail Facebook open-sourced Pythia, a deep learning framework for "visual question answering" research. Q: "What is this cat wearing?" A: "Hat."
Thumbnail Low-wage workers who painstakingly label data for AI to train on now have a name: "AI sharecroppers."
Thumbnail "Need help deciphering that vague text message? AI wants to help." "What does that upside down smiley face mean?"

"I think the idea behind an app like Mei, is based on text messages and call logs that come in through your phone, it is capable of detecting and understanding everything from personality and mood to how we interact and when we interact with our contacts. As it builds this personality profile, it gives you feedback on your text. The examples of that were really interesting. It was everything from 'your mom loves you very much' to 'you seem like more of an introvert than an extrovert' or detecting abnormal behavior if you're not texting like your usual self."
Thumbnail AI2-THOR (which stands for "The House Of inteRaction") is an attempt at making a "photorealistic" (not really) environment for AI agents to learn in. It uses the Unity game engine.
Thumbnail I somehow didn't notice this until today, but a few weeks ago an OpenAI team of AIs beat the current Dota 2 world champion human team 2-0.
Thumbnail AI systems like Auto and Babylon enable people in remote areas to get a medical diagnosis. AI is better than humans at diagnosing lung cancer. There's a report of an AI doing better than pulmonologists on diagnosing respiratory disease. AI is accurate at skin cancer diagnosis. AI can predict death from heart attacks better than humans.
Thumbnail "The human visual system has a remarkable ability to make sense of our 3D world from its 2D projection. Even in complex environments with multiple moving objects, people are able to maintain a feasible interpretation of the objects' geometry and depth ordering."

"We train our depth-prediction model in a supervised manner, which requires videos of natural scenes, captured by moving cameras, along with accurate depth maps. The key question is where to get such data. Generating data synthetically requires realistic modeling and rendering of a wide range of scenes and natural human actions, which is challenging. Further, a model trained on such data may have difficulty generalizing to real scenes. Another approach might be to record real scenes with an RGBD sensor (e.g., Microsoft's Kinect), but depth sensors are typically limited to indoor environments and have their own set of 3D reconstruction issues."

"Instead, we make use of an existing source of data for supervision: YouTube videos in which people imitate mannequins by freezing in a wide variety of natural poses, while a hand-held camera tours the scene. Because the entire scene is stationary (only the camera is moving), triangulation-based methods--like multi-view-stereo (MVS)--work, and we can get accurate depth maps for the entire scene including the people in it. We gathered approximately 2000 such videos, spanning a wide range of realistic scenes with people naturally posing in different group configurations."

Videos of people imitating mannequins, really? "The Mannequin Challenge" is a thing?
Thumbnail "Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills -- including manipulation tasks on a real Sawyer robot arm -- directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real world image-based robotics tasks."

There's video of a robot stacking (very large) Legos (very slowly and kind of jittery).

If you're wondering what SOLAR stands for, it's 'Stochastic Optimal Control with LAtent Representations.' If you're wondering what they mean by 'latent representations,' it's a representation that 'accurately captures the objects' the robot is looking at, specifically a representation that works well with the LQR-FLM method. If you're wondering what the LQR-FLM method is, well, LQR stands for "linear-quadratic regulator" but I don't know what FLM stands for, so that's about as much as I'll be able to explain for ya. All I can tell you is that LQR-FLM is a method for reinforcement learning that is supposed to work end-to-end, that is, from camera input directly to torques on the robot's motors, with no hand-engineered components for perception, state estimation, and low-level control.
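For what it's worth, in the robotics literature FLM appears to stand for 'fitted linear models': instead of knowing the dynamics in advance, you fit a linear model to data gathered around the current trajectory and solve the LQR problem against it. Here's a minimal numpy sketch of just the LQR half, using a toy double integrator whose dynamics are given rather than fitted:

```python
import numpy as np

def lqr(A, B, Q, R, horizon):
    """Backward Riccati recursion: feedback gains K_t for u_t = -K_t @ x_t."""
    P, gains = Q.copy(), []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                # reorder from t=0 to t=horizon-1

# Toy double integrator: state = [position, velocity], control = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), 0.1 * np.eye(1)    # penalize state error and control effort

x = np.array([[1.0], [0.0]])         # start 1 unit away from the goal
for K in lqr(A, B, Q, R, horizon=50):
    u = -K @ x
    x = A @ x + B @ u                # state is driven toward the origin
```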
Thumbnail "Amazon is said to be working on a wrist-worn, voice-activated device that's supposed to be able to read human emotions."

"The device, working in sync with a smartphone app, is said to have microphones that can 'discern the wearer's emotional state from the sound of his or her voice."

"The unifying thread to all of Amazon's hardware efforts right now is to build out an ecosystem of Alexa-capable devices, with the rumored robot making Alexa more mobile and the alleged emotion-sensing wearable giving the voice assistant access to a whole new dimension of user awareness."
Thumbnail The article is paywalled but the actual solicitation from the Naval Supply Systems Command on the General Services Administration website is not. Basically, the US Navy is asking for a private business to collect at least 350 billion publicly available social media posts, all from a single social media service, with at least 200 million unique users and from at least 100 countries, posted within a specific time period: July 1, 2014 through December 31, 2016. The contract is only for the collection of the data, not the analysis of it. The stated purpose for collecting this data is to learn "fundamental social dynamics" and to "model the evolution of linguistic communities, and emerging modes of collective expression, over time and across countries."
Thumbnail Introduction to reinforcement learning with Python.
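If you want the one-screen version of what such an introduction typically builds up to, here's tabular Q-learning on a made-up five-state corridor (everything here is illustrative, not taken from the linked tutorial):

```python
import numpy as np

n_states, n_actions = 5, 2           # corridor; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1  # state, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Core update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # learned greedy policy: all 1s, i.e. "go right"
```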
Thumbnail uTensor is TensorFlow for microcontrollers. "It has features designed to be forward compatible with advances in embedded systems: editable C++ model implementations generated from trained model files; extensibility for Tensorflow Lite files, RTL for FPGAs, MLIR, and other generations; the ability to place tensors in various memory devices; extensibility for optimized kernels, discrete accelerators, and remote services; and Python offline-optimization-tools to enable target-specific and data-driven optimization."
Thumbnail "At a high level, a computer graphics pipeline requires a representation of 3D objects and their absolute positioning in the scene, a description of the material they are made of, lights and a camera. This scene description is then interpreted by a renderer to generate a synthetic rendering.

"In comparison, a computer vision system would start from an image and try to infer the parameters of the scene. This allows the prediction of which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation."

"Combining computer vision and computer graphics techniques provides a unique opportunity to leverage the vast amounts of readily available unlabelled data. As illustrated in the image below, this can, for instance, be achieved using analysis by synthesis where the vision system extracts the scene parameters and the graphics system renders back an image based on them. If the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system similar to an autoencoder, which can be trained in a self-supervised manner."
Thumbnail "Neurodata Without Borders: Neurophysiology (NWB:N) is a project to develop a unified data format for cellular-based neurophysiology data, focused on the dynamics of groups of neurons measured under a large range of experimental conditions. The NWB:N team consists of neuroscientists and software developers who recognize that adoption of a unified data format is an important step toward breaking down the barriers to data sharing in neuroscience."
Thumbnail "We wanted to use TensorFlow 2.0 to explore how well state-of-the-art natural language processing models like BERT and GPT-2 could respond to medical questions by retrieving and conditioning on relevant medical data, and this is the result.

"By combining the power of transformer architectures, latent vector search, negative sampling, and generative pre-training within TensorFlow 2.0's flexible deep learning framework, we were able to come up with a novel solution to a difficult problem that at first seemed like a herculean task."

"700,000 medical questions and answers scraped from Reddit, HealthTap, WebMD, and several other sites, fine-tuned TF 2.0 BERT with pre-trained BioBERT weights for extracting representations from text, fine-tuned TF 2.0 GPT-2 with OpenAI's GPT-2-117M parameters for generating answers to new questions, network heads for mapping question and answer embeddings to metric space, made with a Keras.Model feedforward network, and over a terabyte of TFRECORDS, CSV, and CKPT data."
Thumbnail Christine Payne, the creator of MuseNet, the AI music composer. Brief interview by Andrew Ng, whose Coursera deep learning specialization she completed.
Thumbnail A system has been developed that lets self-driving cars use only very simple maps and drive visually through complex environments the way humans do. "MIT researchers describe an autonomous control system that 'learns' the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. Then, the trained system can control a driverless car along a planned route in a brand-new area, by imitating the human driver."

"Similarly to human drivers, the system also detects any mismatches between its map and features of the road. This helps the system determine if its position, sensors, or mapping are incorrect, in order to correct the car's course."

"To train the system initially, a human operator controlled an automated Toyota Prius -- equipped with several cameras and a basic GPS navigation system -- to collect data from local suburban streets including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests."
Thumbnail The new SALTO robot can jump for up to 10 minutes at a stretch, doing hundreds of jumps; it can jump up to 4 feet high and run 8-10 mph. For those of you who use the intelligent measuring system, that's 120 cm high and 13-16 km/h. It can jump over obstacles and follow a moving target, but it has to be told where the target is: it depends on a motion-capture environment and a laptop sending it radio signals. Outside the motion-capture room, it relies on its onboard inertial measurement unit and can only handle simple surfaces like concrete and wood.
Thumbnail Package delivery robot from Ford. The video shows humans packing the package and driving the car, though. Once robots can pack the package, drive the car (or, more precisely, once the car is the robot and drives itself), and deliver the package, we'll be able to order stuff online and have it delivered without any human involvement.