Engineers Program Human Cells to Record Analog Memories

MIT biological engineers have devised a memory storage system, illustrated here as a DNA-embedded meter recording the activity of a signaling pathway in a human cell. (Image courtesy of MIT.)

A team of biological engineers has devised a way to record complex histories in the DNA of human cells, allowing them to retrieve “memories” of past events, such as inflammation, by sequencing the DNA.

This analog memory storage system – the first that can record the duration and/or intensity of events in human cells – could also help scientists study how cells differentiate into various tissues during embryonic development, how cells experience environmental conditions, and how they undergo the genetic changes that lead to disease.

“To enable a deeper understanding of biology, we engineered human cells that can report on their own history based on genetically encoded recorders,” said Timothy Lu, an MIT associate professor of electrical engineering and computer science, and of biological engineering. This technology should offer insights into how gene regulation and other events within cells contribute to development and disease, he added.

Analog Memory:

Many scientists, including Lu, have devised ways to record digital information in living cells. Using enzymes called recombinases, they program cells to flip sections of their DNA when a particular event occurs, such as exposure to a particular chemical. However, that method reveals only whether the event occurred, not how much exposure there was or how long it lasted.

Lu and other researchers have previously devised ways to record that kind of analog information in bacteria, but until now, no one had achieved it in human cells.

The new MIT approach is based on the genome-editing system known as CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

CRISPR is widely used for gene editing, but the MIT team decided to adapt it for memory storage. In bacteria, where CRISPR originally evolved, the system records past viral infections so that cells can recognize and fight off invading viruses.

“We wanted to adapt the CRISPR system to store information in the human genome,” said MIT graduate student Samuel Perli.

When using CRISPR to edit genes, researchers create RNA guide strands that match a target sequence in the host organism’s genome. To encode memories, the MIT team took a different approach: They designed guide strands that recognize the DNA that encodes the very same guide strand, creating what they call “self-targeting guide RNA.”

Led by this self-targeting guide RNA strand, Cas9 cuts the DNA encoding the guide strand, generating a mutation that becomes a permanent record of the event. That DNA sequence, once mutated, generates a new guide strand that directs Cas9 to the newly mutated DNA, allowing further mutations to accumulate for as long as Cas9 is active and the self-targeting guide RNA is expressed.

By using sensors for specific biological events to regulate Cas9 or self-targeting guide RNA activity, the system produces progressive mutations that accumulate as a function of those biological inputs, thus providing genomically encoded memory.

For example, the scientists engineered a gene circuit that expresses Cas9 only in the presence of a target molecule, such as TNF-alpha, which is produced by immune cells during inflammation. Whenever TNF-alpha is present, Cas9 cuts the DNA encoding the guide sequence, generating mutations. The longer the exposure to TNF-alpha or the greater the TNF-alpha concentration, the more mutations accumulate in the DNA sequence.

By sequencing the DNA later on, researchers can determine how much exposure there was.

“This is the rich analog behavior that we are looking for, where, as you increase the amount or duration of TNF-alpha, you get increases in the amount of mutations,” said Perli.
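That dose- and duration-dependence can be sketched with a toy stochastic model (illustrative only; the cutting probability and signal values are made-up numbers, not measured rates from the study):

```python
import random

def record_signal(signal, p_cut_per_unit=0.02, seed=0):
    """Toy model of a self-targeting CRISPR recorder.

    At each time step the chance that Cas9 cuts the guide-encoding
    DNA (adding one mutation) scales with the current signal level.
    The accumulated mutation count is the analog 'memory' that would
    later be read out by sequencing.
    """
    rng = random.Random(seed)
    mutations = 0
    for level in signal:  # e.g. TNF-alpha concentration per time step
        if rng.random() < min(1.0, p_cut_per_unit * level):
            mutations += 1
    return mutations

# Longer and stronger exposure leaves more mutations behind.
short_weak = record_signal([1] * 10)   # brief, low-level signal
long_strong = record_signal([5] * 50)  # sustained, high-level signal
```

With a fixed seed, the stronger and longer signal can never record fewer mutations than the weaker one, mirroring the monotonic analog behavior Perli describes.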

“Moreover, we wanted to test our system in living animals. Being able to record and extract information from live tissue in mice can help answer meaningful biological questions,” Cui said. The scientists showed that the system is capable of recording inflammation in mice.

Most of the mutations result in deletion of part of the DNA sequence, so the researchers designed their guide RNA strands to be longer than the typical 20 nucleotides, so that they won’t become too short to function. Sequences of 40 nucleotides are more than long enough to record for a month, and the researchers have also designed 70-nucleotide sequences that could be used to record biological signals for even longer.

Tracking Development and Disease:

The researchers also showed that they could engineer cells to detect and record more than one input by producing multiple self-targeting guide RNA strands in the same cell. Each guide RNA is linked to a specific input and is produced only when that input is present. In this study, the researchers showed that they could record the presence of both the antibiotic doxycycline and a molecule known as IPTG.

Currently this method is most likely to be used for studies of human cells, tissues, or engineered organs, the researchers say. By programming tissues to record multiple events, scientists could use this system to monitor inflammation or infection, or to monitor cancer progression. It could also be useful for tracing how cells specialize into different tissues as animals develop from embryos to adults.

“With this technology you could have different memory registers recording exposures to different signals, and you could see that each of those signals was received by the cell for this duration of time or at that intensity,” Perli said. “That way you could get closer to understanding what’s happening in development.”


NASA Competition to Develop Dexterous Humanoid Robots for Mars

NASA’s Robonaut, R5. (Image courtesy of NASA.)

NASA and global consultancy NineSigma have announced the start of a competition to “develop humanoid robots to help astronauts on Mars.”

The million-dollar competition, aptly named the Space Robotics Challenge, aims to create a framework for a humanoid robot that is flexible, dexterous and can withstand the brutal Martian conditions.

To take home the $1M prize, teams will be required to program a virtual robot modeled after NASA’s Robonaut R5. The programs written by participants will have to guide the R5 through a series of tasks, and do so with a forced latency imposed on communication between program and robot.

NASA says this latency represents the time it would take for instructions to be sent from Earth to Mars: approximately 20 minutes on average, depending on the distance between the two planets.
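In software terms, a fixed transmission delay like this is just a time-stamped FIFO queue; a minimal sketch of how a test harness might simulate it (all names here are hypothetical, not part of the challenge framework):

```python
from collections import deque

class DelayedLink:
    """Toy one-way command link with a fixed transmission delay.

    Commands sent at time t are only deliverable at t + delay,
    mimicking the multi-minute Earth-to-Mars lag (units arbitrary).
    """
    def __init__(self, delay):
        self.delay = delay
        self.in_flight = deque()   # (deliver_at, command) pairs, FIFO

    def send(self, now, command):
        self.in_flight.append((now + self.delay, command))

    def receive(self, now):
        """Return all commands that have arrived by `now`."""
        arrived = []
        while self.in_flight and self.in_flight[0][0] <= now:
            arrived.append(self.in_flight.popleft()[1])
        return arrived

link = DelayedLink(delay=20)       # e.g. 20 "minutes"
link.send(0, "align dish")
link.send(5, "repair array")
assert link.receive(10) == []      # nothing has arrived yet
assert link.receive(20) == ["align dish"]
assert link.receive(30) == ["repair array"]
```

Any command a team’s controller issues is acting on a picture of the robot’s state that is already out of date, which is what makes the challenge interesting.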

While NASA’s clever latency trap shouldn’t prove to be a huge stumbling block for programmers, the obstacles they’ll have to face might be a bit more of a challenge. NASA’s vision for the task is a horrific one.

Each participant will be asked to steer their virtual R5 through a Martian hellscape in which a dust storm has just damaged a habitat (no word on whether astronauts were inside, or if any of them survived). Surveying the damage, the R5 will need to align an off-kilter communications dish, repair a broken solar array and patch the habitat’s breached hull.

“Precise and dexterous robotics, able to work with a communications delay, could be used in spaceflight and surface missions to Mars and elsewhere for hazardous and complicated tasks, which will be crucial to support our astronauts,” said Monsi Roman, program manager of NASA’s Centennial Challenges.

According to NASA, the development of flexible, dexterous robotic technologies will be crucial for sustaining human life off-world. In fact, engineers at the agency are already planning ways to deploy these bots, including sending them to the red planet to select landing sites, set up habitats, construct life-support systems and possibly even conduct scientific missions.


Flexible Concrete Won’t Crack Under Pressure

(Image courtesy of Nanyang Technological University.)

The ancient building material concrete is getting a performance boost thanks to a clever reformulation.

Sand, water, gravel and cement. Those are the ingredients of concrete, one of the most ubiquitous building materials on Earth. Since its invention millennia ago, concrete has served as the foundation for structures, roadways and all types of infrastructure. While it’s a good material, it has its flaws. Specifically, concrete is brittle and can crack under pressure.

For two thousand years that’s been concrete’s Achilles’ heel. But things may be changing.

According to Nanyang Technological University (NTU) professor Chu Jian, “We developed a new type of concrete that can greatly reduce the thickness and weight of precast pavement slabs, hence enabling speedy plug-and-play installation, where new concrete slabs prepared off-site can easily replace worn out ones.”

Named ConFlexPave, this reformulated concrete holds true to the age-old recipe but adds a twist by including polymer microfibers in the cocktail. The introduction of these polymers means that loads which would traditionally cause concrete to crack can be distributed across a larger area of the material, giving ConFlexPave greater resiliency.

(Image courtesy of Nanyang Technological University.)

“The microfibers, which are thinner than the width of a human hair, distribute the load across the whole slab,” said assistant professor Yang En-Hua. “[Thus] producing a concrete that’s as tough as metal and at least twice as strong as regular concrete under bending.”

While table-sized slabs of ConFlexPave have proven reliable in laboratory settings, NTU researchers will continue to scale up the amount of ConFlexPave they pour in order to confirm that the material will work as expected once it’s released into the real world.

Though flexible concrete might seem like a mundane technical advance, the impact it could have on global infrastructure can’t be overstated. If flexible concrete can be poured far and wide, billions or even trillions of dollars in infrastructure maintenance could be saved.

What’s more, because flexible concrete can be poured in thinner layers, less material will be needed to repave roadways, saving money and energy. Concrete buildings might also be made more resistant to cracking under the stress of earthquakes. The list of benefits goes on and on.


Volvo and Uber Team Up for Self-Driving Cars

(Image courtesy of Volvo.)

Volvo Cars and Uber have announced that they will join forces to develop next-generation autonomous cars.

The two companies have signed an agreement to establish a joint project that will develop new base vehicles able to incorporate the latest developments in autonomous driving technologies, up to fully autonomous driverless cars.

The base vehicles will be manufactured by Volvo and then purchased from Volvo by Uber. Volvo and Uber are contributing a combined USD $300 million to the project.

Both Uber and Volvo will use the same base vehicle for the next stage of their own autonomous car strategies. For Uber, this will involve adding its self-developed autonomous driving systems to the Volvo base vehicle. Volvo will use the same base vehicle for the next stage of its own autonomous car strategy, which will involve fully autonomous driving.

The Volvo-Uber project marks a significant step in the automotive business, with a car manufacturer joining forces with a new Silicon Valley-based entrant to the car industry, underlining the way in which the global automotive industry is evolving in response to the advent of new technologies. The alliance marks the beginning of what both companies view as a longer-term industrial partnership.

The new base vehicle will be built on Volvo’s fully modular Scalable Product Architecture (SPA). SPA is currently used in Volvo’s XC90 SUV as well as the S90 premium sedan and V90 premium estate.

SPA was developed as part of Volvo’s $11-billion global industrial transformation program, which began in 2010, and has been prepared from the outset for the latest autonomous drive technologies as well as next-generation electrification and connectivity developments.

The development work will be conducted by Volvo and Uber engineers in close collaboration. The project will enhance the scalability of the SPA platform to include all the safety, redundancy and new features necessary to put autonomous vehicles on the road.

Travis Kalanick, Uber’s chief executive, said: “Over one million people die in car accidents every year. These are tragedies that self-driving technology can help solve, but we can’t do it alone. That’s why our partnership with a great manufacturer like Volvo is so important. By combining the capabilities of Uber and Volvo we will get to the future faster, together.”


New Phononics Research Aims to Change How Sound Waves Behave

This experimental laser ultrasonic setup in collaborator Nick Boechler’s lab will create phonons with nature-defying characteristics. (Image courtesy of Nicholas Boechler.)

For decades, advances in optics and electronic devices have powered progress in information technology, energy and biomedicine. Now researchers are pioneering a new field — phononics, the science of sound — with repercussions potentially just as profound.

“If engineers can get acoustic waves to travel in unnatural ways, as they are starting to do with light waves, the world could look and sound radically different,” said Pierre Deymier, University of Arizona (UA) professor and head of materials science and engineering.

Imagine a wall that lets you whisper to a person on the other side but doesn’t let you hear that person. Or a Band-Aid that images tissue through the vibrations it emits. Or a computer that uses phonons, the particles that carry heat and sound, to store, process and transport information in ways unimaginable with conventional electronics.

“It may sound like weird science, but I believe it is the wave of the future,” Deymier said.

Breaking the Laws of Waves:

The principle of reciprocity says that waves, such as electromagnetic, light and acoustic waves, behave the same regardless of their direction of travel. It is a symmetrical process, unless a material barrier breaks that symmetry.

There often is. Sound and light waves lose power when they encounter a wall, for example, and may reverse course. The nine NewLAW projects aim to break this symmetry of light and sound waves by making them travel in only one direction. So, when encountering a wall, a sound wave might continue around it, or even be completely absorbed by it.

Other researchers have created advanced materials that bend light in unnatural ways to render parts of an object invisible. Similarly, Deymier’s research could lead to walls that allow sound to pass more easily in one direction, or objects that remain silent when approached from one direction.

The Power of Phonons:

Most modern technologies are based on the manipulation of photons and electrons. Deymier is among the pioneers of the emerging discipline of phononics, which spans several fields, including quantum mechanics, physics, materials science and engineering, and applied mathematics.

He has developed specialized phononic crystals, artificial elastic structures with unusual acoustic wave propagation abilities, such as the capacity to increase the quality of ultrasound imaging with superlenses, or to process information with sound-based circuits.

For the new NSF-funded study, he is using an advanced material, chalcogenide glass, whose mechanical properties can be dynamically modulated in space and time to break reciprocity and transmit sound in only one direction.

This line of investigation could ultimately yield a vast selection of products with peculiar features that could improve noise abatement, ultrasonic imaging and information-processing technologies, Deymier said.

“Imagine a computer whose operation relies on processing information transported by sound through non-reciprocal phononic elements instead of electrical diodes, or a medical ultrasonic imaging device with extraordinary resolution.”

“Working with phonons is incredibly exciting,” Deymier said. “We’re going to change the way people think about sound and are opening an entire new world.”

Deymier has received $1.9 million from the National Science Foundation’s Emerging Frontiers in Research and Innovation, or EFRI, program to lead a four-year study on manipulating how sound waves behave. His collaborators are UA professor of materials science and engineering Pierre Lucas and Nicholas Boechler, assistant professor of mechanical engineering at the University of Washington.


Revisiting Technology to Keep Astronauts on Their Feet

If you’ve never watched astronauts tripping over rocks on the moon, you should take the time to do so.

Then consider the threat of a suit puncture occurring while an astronaut trips over those rocks, and it becomes a little less entertaining and considerably more concerning.

To help these clumsy walkers and others here on terra firma, scientists at MIT are developing special shoes that can be built into a navigation system to help the wearer avoid obstacles to mobility.

Avoid obstacles by listening to the sole of your shoes, that is. (Image courtesy of Jose-Luis Olivares/MIT.)

This is far from a new concept. Haptic feedback in shoes has been around for years, but the team at MIT has taken a different approach, going back to the drawing board to determine the best way to implement this sort of technology.

By researching which areas of the foot are most sensitive to the feedback motors, Leia Stirling, an assistant professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), whose group led the work, took the technology back to basics.

“A lot of students in my lab are looking at this question of how you map wearable sensor information to a visual display, or a tactile display, or an auditory display, in a way that can be understood by a nonexpert in sensor technologies,” said Stirling. “This preliminary pilot study permitted Alison [Gibson, a graduate student in AeroAstro and first author on the paper] to learn about how she could develop a language for that mapping.”

The research shows not only that certain areas of the foot are less receptive to the feedback, but also that people had difficulty attending to the stimuli or identifying differences in feedback intensity while distracted.

“Trying to provide people with more information about the environment, especially when not only vision but other sensory information, auditory as well as proprioception, is compromised, may really be beneficial,” said Shirley Rietdyk, a professor of kinesiology and health at Purdue University who studies the neurology and biomechanics of falls.

“From my perspective, [this work could be useful] not only for astronauts but also for firefighters, who have well-documented issues interacting with their environment, and for people with compromised sensory systems, such as older adults and people with diseases and disorders.”

The work could directly apply to other navigation systems for the differently abled, such as MIT’s virtual “guide dog” 3D camera system. This integration and the variety of output methods would allow people at any ability level to navigate as quickly as anyone else.


Detangling the Complexity of Waves with Acoustic Voxels

Columbia Engineering researchers were able to control the acoustic response of an object when it’s tapped and thereby tag the object acoustically. Given three items with identical shapes, a smartphone can read the acoustic tags instantly by recording and analyzing the tapping sound, and thereby identify each item. (Image courtesy of Changxi Zheng/Columbia Engineering.)

A novel method to simplify the design of acoustic filters has been developed through simulation techniques in a collaborative effort among engineering researchers.

The engineering research team behind this advance chose a fairly simple shape (a hollow cube with holes on some of its six faces) as their base module so that it could be 3D printed. The new technique is capable of determining optimal filter designs, which then enables the selective reduction of sounds at specific frequencies.

Its creators have named the approach “Acoustic Voxels.” Acoustic Voxels helps designers move away from trial-and-error iterations in the design of acoustic filters. Instead, the program precomputes the acoustic properties of an assembly. It also enables the user to simulate the filter with varying properties.

Additionally, the engineering research team behind Acoustic Voxels created a technique for computationally optimizing the connections between filter modules in order to achieve a desired effect. Acoustic Voxels operates 70,000 times faster than current algorithms used to predict acoustic properties.

An interesting outcome of Acoustic Voxels was that the team could design acoustic tags into objects that appeared identical to one another. However, when tapped, each object would give a distinctive sound. Although the affected frequencies often show a significant dependence on the shape of the cavity, the exact influence of the shape is complex and difficult to understand.

Acoustic Voxels not only sped up and computationally optimized the design process, it also enabled the design of more advanced geometries. Current computational tools are limited to simpler shapes.

When waves are transmitted through a cavity, many of them are reflected back and forth. These reflected waves create either a constructive superposition, which amplifies the sound, or a destructive superposition, which muffles it. This is how acoustic filters operate.
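That interference effect can be checked numerically with a toy example (not the team’s solver): two unit sine waves reinforce when in phase and cancel when shifted by half a wavelength.

```python
import math

def peak_amplitude(phase_shift, n=1000):
    """Peak amplitude of two summed unit sine waves,
    the second delayed by `phase_shift` radians."""
    return max(abs(math.sin(t) + math.sin(t + phase_shift))
               for t in (2 * math.pi * i / n for i in range(n)))

constructive = peak_amplitude(0.0)     # in phase: amplitudes add (~2.0)
destructive = peak_amplitude(math.pi)  # half-wave shift: near-total cancellation
```

A filter cavity works the same way, except the phase shift comes from the extra path length a reflected wave travels, so the cancellation only occurs at the frequencies the geometry is tuned to.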

Wojciech Matusik, associate professor of electrical engineering and computer science at the MIT Computer Science and Artificial Intelligence Laboratory, explained the current state of the research: So far, the method is mostly suitable for controlling impedance and transmission loss at discrete frequencies, such as in traditional muffler design.

However, the scope of this study covered only one shape and a single material. “Extending our method to additional materials and shapes could offer a larger palette for better acoustic filtering control,” said Matusik.

The engineering research team behind Acoustic Voxels was a blended, collaborative group, made up of members from Disney Research, the Massachusetts Institute of Technology and Columbia University. The National Science Foundation supported this work.


Transparent Wood Windows are Cooler than Glass

(Image courtesy of the University of Maryland.)

Engineers have demonstrated that windows made of transparent wood could provide more even and consistent natural lighting and better energy efficiency than glass.

In a paper just published in the journal Advanced Energy Materials, the team, headed by Liangbing Hu of the University of Maryland’s department of materials science and engineering, lays out research showing that their transparent wood provides better thermal insulation and lets in nearly as much light as glass, while eliminating glare and providing uniform and consistent indoor lighting. The findings advance earlier published work on their development of transparent wood.

“The transparent wood lets through just a little bit less light than glass, but a lot less heat,” said Tian Li, the lead author of the new study. “It is very transparent, but still allows for a small amount of privacy because it is not completely see-through. We also learned that the channels in the wood transmit light with wavelengths around the range of the wavelengths of visible light, but that it blocks the wavelengths that carry mostly heat,” said Li.

The team’s findings were derived, in part, from tests on a tiny model house the team built with a transparent wood panel in the ceiling. The tests showed that the light was more evenly distributed around the space with a transparent wood roof than with a glass roof.

The channels in the wood direct visible light straight through the material, but its cell structure bounces the light around just a little bit, a property called haze. This means the light does not shine directly into your eyes, making it more comfortable to look at. The group photographed the transparent wood’s cell structure in the University of Maryland’s Advanced Imaging and Microscopy (AIM) Lab.

Transparent wood still has all the cell structures that made up the original piece of wood. The wood is cut against the grain, so that the channels that drew water and nutrients up from the roots lie along the shortest dimension of the window. The new transparent wood uses these natural channels to guide sunlight through the wood.

As the sun passes over a house with glass windows, the angle at which light shines through the glass changes with the sun’s position. With windows or panels made of transparent wood instead of glass, the channels in the wood direct the sunlight the same way no matter where the sun is in the sky.

“This means your cat would not have to get up out of its nice patch of sunlight every few minutes and move over,” Li said. “The sunlight would stay in the same place. Also, the room would be more evenly lit at all times.”

Working with transparent wood is similar to working with natural wood, the researchers said. However, their transparent wood is waterproof thanks to its polymer component. It is also much less breakable than glass because the cell structure inside resists shattering.

The research team has patented their process for making transparent wood. The process starts with bleaching all of the lignin out of the wood; lignin is the component that makes wood both brown and strong. The wood is then soaked in epoxy, which restores its strength and makes it clearer. The team has used small squares of linden wood about 2 cm x 2 cm, but they note that the wood can be any size.


New Audi Shock Absorber System Generates Electricity from Kinetic Energy

(Image courtesy of Audi.)

The recuperation of energy plays an increasingly important role in transportation, including in a car’s suspension. Audi is working on a prototype known as “eROT,” in which electromechanical rotary dampers replace the hydraulic dampers used today.

The principle behind eROT is easily explained: “Every pothole, every bump, every curve induces kinetic energy in the car. Today’s dampers absorb this energy, which is lost in the form of heat,” said Dr.-Ing. Stefan Knirsch, board member for technical development at AUDI AG. “With the new electromechanical damper system in the 48-volt electrical system, we put this energy to use.”

The eROT system is designed to respond quickly and with minimal inertia. As an actively controlled suspension, it adapts to irregularities in the road surface and to the driver’s driving style. A damper characteristic that is virtually freely definable via software increases the functional scope.

It eliminates the mutual dependence of the rebound and compression strokes that limits conventional hydraulic dampers. With eROT, Audi configures the compression stroke to be smooth without compromising the taut damping of the rebound stroke.

The eROT system enables a second function besides the freely programmable damper characteristic: It can convert the kinetic energy during compression and rebound into electricity. To do this, a lever arm absorbs the motion of the wheel carrier. The lever arm transmits this force via a series of gears to an electric motor, which converts it into electricity.

The average recuperation output during testing on German roads is 100 to 150 watts, ranging from 3 watts on a freshly paved freeway to 613 watts on a rough secondary road. Under customer driving conditions, this corresponds to a CO2 savings of up to three grams per kilometer (4.8 g/mi).
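As a back-of-the-envelope check (the 60 km/h average speed below is an assumption for illustration, not an Audi figure), average recuperation power converts into energy recovered per distance like this:

```python
def recovered_wh_per_100km(avg_power_w, speed_kmh):
    """Watt-hours recuperated over 100 km at a steady average
    power output and average speed (energy = power x time)."""
    hours_per_100km = 100.0 / speed_kmh
    return avg_power_w * hours_per_100km

# 150 W average at an assumed 60 km/h comes to about 250 Wh
# per 100 km, half the prototype battery's 0.5-kWh capacity.
recovered = recovered_wh_per_100km(150, 60)
```

The same formula shows why road surface matters so much: at the 3 W freeway figure the harvest is negligible, while the 613 W rough-road figure recovers energy an order of magnitude faster.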

The new eROT technology is based on a high-output 48-volt electrical system. As currently configured, its lithium-ion battery offers an energy capacity of 0.5 kilowatt-hours and peak output of 13 kilowatts. A DC converter connects the 48-volt electrical subsystem to the 12-volt primary electrical system, which includes a high-efficiency, enhanced-output generator.

Audi reports that initial test results for the eROT technology are promising, meaning its use in future Audi production models is certainly plausible. A prerequisite for this is the 48-volt electrical system, a central element of Audi’s electrification strategy.

In the next version, planned for 2017, the 48-volt system will serve as the primary electrical system in a new Audi model and feed a high-performance mild hybrid drive. According to the company, it will offer potential fuel savings of up to 0.7 liters per 100 kilometers.


SCUBAJET – A Watersports Jet Engine

When Patrizia Giovanniello became a parent, she scaled back her water activities on Lake Constance in Switzerland. With her boyfriend and daughter she enjoyed time on their stand-up paddleboard (SUP) but was concerned about unpredictable weather and currents stranding her family far from shore. Armin, her boyfriend, and his father found a solution to this problem by developing Scubajet, a flexible jet engine for water sports.

Scubajet can attach to stand-up paddleboards, small dinghies, canoes, kayaks or divers. The engine can achieve a speed of up to six knots and runs for 1.5 hours on a single battery charge; the campaign page says that the device is completely free of emissions and can output up to 1.5 kilowatts.


Several adapters are available to connect the Scubajet to boards, dinghies, or kayaks. Starboard, Simmer Style, SIC, JP-Australia, Sevylor, Mistral, RRD, Fanatic, Hobie, Red Paddle Co, and Naish have all partnered with the company to confirm that their current equipment can be fitted with a Scubajet. The campaign page says that testing has been done on diving equipment to develop adapters that will give divers some additional propulsion power.

The unit itself is 25 centimeters long, 80 centimeters wide and weighs 2.4 kilograms. The system was designed to fit into a backpack when not in use, though videos on the campaign page show the unit strapped to the outside of a user's backpack. An auto-shutoff tells the engine to stop immediately if the user falls into the water. The Scubajet's remote provides the ability to start, stop, and change the unit's speed.

I’m looking at Scubajet with a healthy skepticism. There’s a bit of a cultural difference in the specs; I’m more comfortable knowing an engine’s horsepower than a wattage or max speed callout. The idea of a propulsion system that’s far more compact and practical than an outboard motor is great, and the adapter system looks elegant and seamless in all of the demonstration GIFs on the campaign page. The campaign will be funded on September 1 if its €150,000 goal is met, and units will then ship in December 2016.

post scuba foto2


NASA Asteroid-Capture Technology Passes Major Test

post tect foto1
A demonstration of the ARM setup. Satellite, robotics and solar propulsion engine sold separately. (Image courtesy of NASA.)

NASA’s Asteroid Redirect Mission (ARM) has passed a major program review (Key Decision Point-B), paving the way for one of NASA’s most ambitious missions in recent history.

Over the last five years, the idea of space mining and asteroid collection has transformed from science fiction into a budding reality. Although there are a number of private enterprises on the hunt for the untold riches hidden among the stars, NASA has also shown an interest in developing the technology necessary to capture asteroids and safely and precisely maneuver them through space.

That’s precisely the purpose of ARM.

According to NASA, ARM is a robotic mission that will “visit a near-Earth asteroid, collect a multi-ton boulder from its surface, and redirect it into a stable orbit around the moon. ” Once the boulder is secured in its orbit around the moon, astronauts will explore the captured rock and return samples of the alien soil to Earth for study.

(Where these astronauts might come from hasn’t been made clear by NASA. Presumably, they’d be shuttled to lunar orbit from Earth or the ISS, but wouldn’t it be more interesting if they were living in a Moon colony? )

Though ARM is still in the very early stages of development (NASA hasn’t even selected which asteroid it might pluck off its path), the agency has also stated that a number of companion technologies will be tested during the ARM project. Among these technologies are a high-power, high-throughput solar electric propulsion system, advanced robotics for capturing an asteroid and “advanced autonomous high-speed proximity operations at a low-gravity planetary body” (translation: NASA’s going to demonstrate that a tractor beam is really a thing).

As of this writing, NASA has stated that it expects the robotic part of the ARM project to launch in December of 2021. Five years later, astronauts are slated to inspect the asteroid as it orbits the Moon.


China Launches the First Quantum Communication Satellite

post first foto1
(Image courtesy of Xinhua.)

China has launched the world’s first quantum communications satellite in a bid to create an impenetrable wall around its communications.

Over the last decade, the Chinese National Space Agency has made great strides in modernizing the country’s access to space and this latest launch is further proof that the world’s largest nation has the technological capacity to build and deploy powerful communication technology.

Named Quantum Experiments at Space Scale (QUESS), the satellite will be used to secure communications between Beijing and Urumqi, the capital of China’s Xinjiang region.

To attain secure communication, QUESS will use the spooky interaction known as quantum entanglement to secure each bit pinged off the satellite. The way that works is pretty brain-bending.

How Quantum Entanglement Works:

Quantum entanglement allows two or more particles to be linked together by entangling their quantum states. Once entanglement occurs, none of the particles inside that ensnared state can be distinguished from one another. The kicker is that if an outside observer wanted to verify or observe the entangled state, the act of observation would cause the state to collapse.

In quantum communication, the fact that entangled particles collapse when observed makes them an ideal fit for an encryption key. If a would-be snoop tried to intercept a quantum encryption key, the attempt would disturb the states and be detectable, making the key theoretically impossible to crack unnoticed. Therefore, if a satellite carried a quantum key communicator, a quantum entanglement emitter and a quantum entanglement source, and a ground station had the same, the two locations should be able to communicate without fear of interception.
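The detection principle can be illustrated with a toy, purely classical simulation in the spirit of prepare-and-measure quantum key distribution. QUESS's actual protocol is not described here; the two bases, the function name and the probabilities below are illustrative assumptions. The point is only that an eavesdropper who measures in a randomly chosen basis corrupts roughly a quarter of the bits the sender and receiver later compare:

```python
import random

def simulate_qkd(n_bits, eavesdrop, seed=0):
    """Toy model: a sender encodes bits in random bases; an eavesdropper
    who measures in a random basis collapses mismatched states, which
    shows up as errors in bits the receiver should have gotten right."""
    rng = random.Random(seed)
    matched = errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        basis_a = rng.randint(0, 1)        # sender's encoding basis
        value, basis = bit, basis_a
        if eavesdrop:
            basis_e = rng.randint(0, 1)    # snoop picks a basis...
            if basis_e != basis:
                value = rng.randint(0, 1)  # ...and collapses the state
            basis = basis_e
        basis_b = rng.randint(0, 1)        # receiver's measuring basis
        result = value if basis_b == basis else rng.randint(0, 1)
        if basis_b == basis_a:             # keep only matched-basis bits
            matched += 1
            errors += result != bit
    return errors / matched

print(simulate_qkd(20000, eavesdrop=False))  # 0.0: no disturbance
print(simulate_qkd(20000, eavesdrop=True))   # roughly 0.25
```

Without the eavesdropper, the sifted bits always agree; with one, the error rate hovers near 25 percent, which is exactly the statistical tripwire such protocols check for.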

But is this technology really ready for prime time? Well, not just yet, but with the launch of QUESS, secure quantum communications may have taken a big step forward.

Securing Satellite Communications:

According to the Xinhua news agency, the official press agency of the People’s Republic of China, “In its two-year mission, QUESS is designed to establish ‘hack-proof’ quantum communications by transmitting uncrackable keys from space to the ground”.

Xinhua continued, “Quantum communication boasts ultra-high security as a quantum photon can neither be separated nor duplicated… It is hence impossible to wiretap, intercept or crack the information transmitted through it. ”

With the threat and intensity of cyberwarfare ratcheting up every year, it makes sense that China, and likely other nations, have taken steps to protect their most sensitive communications. While China has stated that QUESS is an entirely peaceful piece of technology, there is one unequivocal fact that can be gleaned from the launch: China has become a major player in the quantum communications and space game.


Understanding Augmented Reality Headsets

Augmented reality may be a bit more promising than virtual reality for commercial engineering applications given its basic difference: it lets you layer digital information directly on top of physical reality, or “data.” It’s important to remember that the nascent augmented reality market has not yet proven itself a reliable commodity for engineers. For media and entertainment, it’s impossible not to notice the success and popularity of Pokémon Go, the augmented reality game from Niantic. Engineering applications are in short supply, but they do exist.

In this post, we’ll cover a cross-section of augmented reality headsets and focus on the ones that have the most promise for engineering apps, such as training, maintenance, collaboration and visualization.

Differentiating Augmented Reality Products:

Augmented reality can be experienced on mobile devices such as a smartphone or tablet. There are also augmented reality headsets known as head-mounted displays (HMDs), eyeglasses, visors, helmets and even a pair of augmented reality contact lenses.

Truly Immersive Augmented Reality Requires a Big Headset:

One of the most interesting issues with making immersive augmented reality is the amount of physical real estate it requires from the user. There is a direct relationship: the volume of optics must increase as the desired display size and field of view increase. With a compact wearable like Google Glass, for example, the widest field of view (FoV) you’ll achieve is around 20 to 30 degrees. Google Glass is 13 degrees, and something like the Epson Moverio BT-2000 gets around 23 degrees.

That is basically why headsets yield a more immersive experience.
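The geometry behind this trade-off is simple: the horizontal FoV of a flat display depends on its apparent width and its distance from the eye. A minimal sketch with made-up dimensions (these are not the actual optics of Google Glass or the Moverio):

```python
import math

def fov_degrees(display_width_mm, eye_distance_mm):
    """Horizontal field of view for a flat virtual display of a given
    apparent width seen at a given distance from the eye."""
    return math.degrees(2 * math.atan(display_width_mm / (2 * eye_distance_mm)))

# Illustrative (assumed) numbers: a small near-eye optic vs. a larger one.
print(round(fov_degrees(14, 60), 1))   # compact optic: narrow FoV, ~13 degrees
print(round(fov_degrees(100, 60), 1))  # bigger optics: much wider FoV
```

Doubling the display width more than doubles nothing for free: a wide FoV demands physically larger optics, which is why immersive headsets are bulky.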

Augmented Reality Terminology:

Many of the terms, such as FoV, latency, frame rate and refresh rate, are similar to those you need to familiarize yourself with in order to understand virtual reality, which you can see in a previous post I wrote called “Understanding Virtual Reality Headsets.”

Virtual retinal display (VRD) technology, which is particular to AR, beams a raster projection directly onto the user’s retinas. The result is similar to seeing a display directly before your eyes, much like a computer or television screen. The effectiveness of VRDs has improved greatly with the development of LED technology, allowing users to see them even during hours of sunlight.

Summary of Augmented Reality Applications:

Augmented reality has been found in a variety of novel ways, across many different fields and disciplines, including archaeology, construction, medicine, emergency management, industrial design and the military.

The first three headsets featured here have the most potential uses for engineers. Afterwards, I’ll briefly explore a cross-section of augmented reality headsets and glasses with commercial and enterprise applications and potential.

1) DAQRI: The Smart and Safe Helmet:

The DAQRI Smart Helmet (DSH) is a combination safety helmet and augmented reality headset that overlays virtual instructions, safety information, training and visual mapping over real-world information. Workers in oil and gas, automation and manufacturing sectors who need to follow complex instructions to perform complex processes can look through the DSH and see digital information overlaid on a variety of different contexts, whether it is a Siemens controller, a high-quality scanner or control equipment intended for metrology purposes.

post un foto1
The DSH overlays digital instructions over equipment in real time and adjusts to the movements of the workers. (Image courtesy of DAQRI.)

The helmet comes with its own battery and docking station and weighs as much as any typical industrial hardhat. The DSH varies widely in price, fetching anywhere from $5,000 to $15,000, because the features need to be custom built.

Powered by a sixth-generation Intel Core m7 processor plus RealSense scanning technology, the DSH may be the first functional and useful HMD that uses augmented reality to help human workers perform challenging tasks.

The DSH’s face shield and injection-molded plastic helmet component are ANSI-compliant. The inner area of the helmet’s shell is a mix of cast aluminum and carbon fiber composite.

post un foto2
Thermal PoV through the DSH. (Image courtesy of DAQRI.)

DAQRI’s multiple cameras work together to make this the first fully industrial augmented reality headset. It features a 13-megapixel HD camera to capture videos and photos, track objects and recognize 2D targets and colors. Intel’s RealSense technology has two built-in infrared cameras, and DAQRI integrates them with an infrared laser projector that can sense depth by measuring deflected infrared light. A low-resolution camera is integrated with an industrial-grade inertial measurement unit (IMU), which allows the helmet to compute its relative position in space in real time via a combination of gyroscopes and accelerometers. Another high-quality IMU is available for additional applications. For sound, there are four microphones, volume and power buttons and an output jack for headphones.
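How an IMU lets a device compute relative position is worth a quick sketch: acceleration is sampled at a fixed rate and integrated twice, first into velocity and then into displacement. The one-axis toy below uses assumed sample values, not DAQRI's actual sensor fusion:

```python
def dead_reckon(accels, dt):
    """One-axis dead reckoning: integrate acceleration (m/s^2) twice
    to estimate velocity (m/s) and final position (m)."""
    velocity = position = 0.0
    for a in accels:
        velocity += a * dt          # first integration: velocity
        position += velocity * dt   # second integration: position
    return position

# Accelerate at 1 m/s^2 for 1 s (100 samples at 10 ms), then coast for 1 s.
samples = [1.0] * 100 + [0.0] * 100
print(dead_reckon(samples, 0.01))  # ~1.5 m (0.5 m accelerating + 1 m coasting)
```

In practice integration error accumulates quickly, which is why real systems fuse IMU data with camera-based tracking rather than relying on dead reckoning alone.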

Workers wearing the DSH can see augmented instructions that shift relative to their actual environment. A worker will be able to look at a device with 100 readouts, and the DSH will pull their attention to a pressure gauge, for example, that’s reading too high or too low. The DSH’s infrared cameras can constantly monitor equipment by overlaying baseline thermal data and current thermal data to make distinctions and judgments on the fly. Workers equipped with the DSH can visually scan for out-of-tolerance thermal anomalies that may put an operation in danger.
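The out-of-tolerance check described above amounts to comparing current thermal readings against a baseline and flagging large deviations. A hedged sketch with hypothetical equipment names and a made-up threshold (not DAQRI's actual algorithm):

```python
def flag_anomalies(baseline, current, tolerance):
    """Compare current thermal readings against a baseline and flag
    any item whose deviation exceeds the tolerance (degrees C)."""
    return [name for name in baseline
            if abs(current[name] - baseline[name]) > tolerance]

# Hypothetical readings for three pieces of equipment (degrees C).
baseline = {"pump": 60.0, "valve": 45.0, "motor": 70.0}
current = {"pump": 61.5, "valve": 58.0, "motor": 69.0}
print(flag_anomalies(baseline, current, tolerance=5.0))  # ['valve']
```

A real system would work per-pixel on thermal images rather than per-device, but the judgment is the same: deviation from baseline beyond a tolerance.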

post un foto3
The DSH’s face shield and the hard helmet itself are ANSI compliant. The outer shell is injection-molded plastic. (Image courtesy of DAQRI.)

The DSH was used in a case study with Hyperloop that illustrates its collaborative power between workers of a large and widely dispersed manufacturing facility. A novice operator was using a robotic welder for precise spot welding. A more experienced operator could tune in to the networked DSH of the less-experienced counterpart, assess what they were doing and relay correct directions immediately.

This means that a company could buy a custom-built series of DSHs, scale up operations with less-experienced (less-expensive) workers and have several experts remotely monitor and guide them all the way through production.

According to DAQRI, the DSH will be available for purchase by its top-tier customers in Q1 2016.

2) Metavision’s Meta 2:

post un foto4
The Meta 2 by Metavision includes a 2560×1440 display. (Image courtesy of Metavision.)

The Meta 2 is an augmented reality headset from Metavision with several features that are promising for potential industrial uses, such as a wide FoV. A small FoV is undesirable in augmented reality, but it isn’t subject to the same distraction it causes in virtual reality. In virtual reality, whatever isn’t in the FoV (which contains 3D models of various polygonal sizes) is surrounded by pitch-black darkness. In augmented reality, a low FoV equals a small translucent digital window with 3D content surrounded by the real world of physical data that one would see without a headset.

The FoV on the original Meta was 25 to 35 degrees, which is small compared to the average virtual reality FoV. The Meta 2 features a 90-degree FoV, which is a tremendous breakthrough, particularly when considering industrial applications like training, maintenance or manufacturing. There is a trade-off that allows this wide FoV: like its predecessor, the Meta 2 is tethered. Connection to a workstation limits all kinds of training applications and limits use on a factory floor for assembly or maintenance. If you compare the Meta 2 to an untethered augmented reality headset like the Microsoft HoloLens, you realize immediately that the Meta 2 is at a disadvantage for practical uses. But this has to be interpreted as a long-term design strategy for both Metavision and Microsoft. Microsoft believes it can advance its untethered AR headset by having developers plan for a wireless hardware device, while Metavision plans to develop the technology first and untether it before a consumer version is in demand. It is important to keep in mind that both the HoloLens and the Meta 2 are basically developer kits rather than full-fledged consumer products.

Meta takes full advantage of the continuing miniaturization and democratization of inexpensive sensors, paired with a high-definition camera, to track the user's hands in the context of the digital and physical environment seen through the headset. The hand-tracking controls of the Meta 2 are not as sophisticated as the Leap Motion Orion controllers, but the notion of separate hardware for hand-tracking may be going the way of the dodo in favor of eye-tracking technology, though this is debatable. Preorders of the Meta 2 developer kit are available now for $949, and Metavision says the devices will ship in Q3 2016.

It’s understood at this point that the potential killer engineering or industrial app for augmented reality headsets like the powerful Meta 2 is still to come.

3) Microsoft HoloLens:

Microsoft HoloLens is an augmented reality headset that was developed under the code name Project Baraboo. It is also known as a “mixed reality” headset, or holographic computer. “Mixed reality” is a term that’s gaining momentum in the press and may also be used to describe headsets that can switch from virtual reality mode to augmented reality mode. Magic Leap, the mysterious startup with no products but major investments led by Google and Alibaba, has pushed for this linguistic distinction in particular.

post un foto5
Microsoft HoloLens costs $3,000 and is primarily for developers at this time. The advantage it has over the Meta 2 is that it is untethered, allowing for a relatively large degree of freedom. (Image courtesy of Microsoft.)

Semantics aside, the HoloLens descends from the motion detection and scanning technology hardware known as the Microsoft Kinect, which was released in 2010. Microsoft uses the word hologram to describe the digital information that’s overlaid on the physical world (which you can see through the visor). The hope is that holographic computing via headset will eventually replace the screens (laptop, PC, mobile devices) we use around the clock today.

The HoloLens features an accelerometer, magnetometer, gyroscope, four depth-sensing cameras, a light sensor, four microphones and a 2-megapixel camera. Aside from the typical GPU and CPU found in nearly all computing devices, the HoloLens additionally has something known as a Holographic Processing Unit, or HPU. The HPU is a sort of “grand central terminal” for all of the input from the various sensors.

Microsoft is also building the Windows 95 of augmented reality operating systems, called Windows Holographic, enabling manufacturers to focus on developing hardware rather than worrying about software. In theory, this will help augmented reality devices reach a tipping point with consumers and help augmented reality go mainstream.

4) A cross-section of alternative augmented reality headsets: There are dozens of augmented reality headsets available today, and this random cross-section is meant to highlight several differences and similarities.

Google Glass: First we have Google Glass, which was discontinued after barely a year on the market. Google Glass 2.0 is currently in development, and Google is now showcasing enterprise and industrial applications for the headset. It has a heads-up display, a microphone, a CPU, a battery, a GPS, speakers and a projector that overlays digital information onto the user’s view by beaming it through a visual prism that focuses the digital information onto the retina.

Google is concentrating on enterprise use cases, like Boeing using them for wire harness assembly. The headsets use voice commands and a side panel a la Geordi La Forge from Star Trek, but they won’t help you with your vision, unfortunately.

R-7 Smart Glasses: The form factor of these glasses from Osterhout Design Group separates them from the pack of giant and boxy augmented reality headsets like the Microsoft HoloLens and Meta 2. They just kind of look like awkward, oversized sunglasses.

post un foto6
To control your virtual environment on the R-7 smart glasses, you can use a trackpad on the eyeglasses themselves or use a paired controller. (Image courtesy of Osterhout Design Group.)
They run a custom version of Android KitKat called ReticleOS, so you can run Android apps and load movies.

The R-7s are very light as well, weighing about 2.5 lbs, which is about a pound less than the HoloLens.

Vuzix M300 Smart Glasses: This headset seems like a carbon copy of Google Glass, except it has slightly better resolution and also runs iOS. The 64 GB of internal storage isn’t all that interesting, but its partnership with Ubimax and use in the logistics industry is worth mentioning. DHL utilizes xPick on the earlier version of the smart glasses (Vuzix M100 Smart Glasses).

Ubimax produces the Enterprise Wearable Computing Suite, which is a group of industrial augmented reality applications similar to xPick, including xMake for manufacturing, xAssist for remote help and xInspect for maintenance.

Moverio Pro BT-2000: Epson’s first edition of this augmented reality headset, the BT-100, premiered before Google Glass. This latest edition specifically targets enterprise customers for remote viewing with its 5-megapixel camera, 3D mapping and gesture recognition capabilities.

Name           Google Glass   R-7 Smart Glasses        Vuzix M300    Moverio Pro BT-2000
Company        Google         Osterhout Design Group   Vuzix         Epson
Shipping       Discontinued   Yes                      Q3 2016       Yes
FoV (degrees)  15             30                       20            23
Resolution     640 x 360      1280 x 720               960 x 540     960 x 540
Platform       Android        Android                  Android/iOS   Android
Cost           USD$1,500      USD$2,750                USD$1,499     USD$2,999

5) Magic Leap: News of this unicorn startup arrives wrapped in mysterious claims of “light-field displays” and “photonic chips” that threaten to upend everything we know about consumer-oriented headset kits like the Meta 2. The startup has raised about $1.5 billion in funding, led by Google (which some speculate was a response to Facebook’s $2 billion purchase of Oculus), on the strength of supposedly ground-breaking technology in which a special light apparatus beams holographic images right onto your eyes.

post un foto7
The light-field displays could reduce the bulky and goofy industrial design that characterizes the majority of augmented reality headsets currently available. (Image courtesy of Magic Leap.)

Magic Leap comes last in this overview because it represents the promise, potential and global fascination with the future of augmented reality as a new computing platform. The company has not released a product as of yet but promises to revolutionize the field of “mixed reality” or augmented reality, or whatever you prefer to call it.

The potential uses for engineers are there, particularly in the DSH, but a standardized platform that is the equivalent of the iPhone for augmented reality still remains elusive.


Is NVIDIA’s Latest Graphics Board Too Good for You?

post nv foto1
The new Quadro P6000 is the fastest NVIDIA graphics board ever. (Image courtesy of NVIDIA.)

NVIDIA is moving fast. The new Pascal architecture has just begun shipping; last week, the new Titan X was launched based on the Pascal architecture; this week, the top-of-the-line Quadro boards were launched.

This is a faster rollout across product lines than NVIDIA managed for its last two major GPU releases, the Maxwell and Kepler architectures.

Alongside the Quadro P6000, NVIDIA announced the Quadro P5000. Here are the highlights:

The Quadro P6000 and P5000 are based on NVIDIA’s GP102 graphics processor

The Quadro P6000 has 24 GB of memory and 3840 compute unified device architecture (CUDA) cores, almost 300 more cores than the Titan X

For virtual reality (VR) and 3D stereo applications, simultaneous multi-projection allows right- and left-eye projections to be created in a single geometry pass

Loaded with GDDR5X memory, which provides twice the bandwidth of the GDDR5 on the previous-generation board (the Quadro M6000) and is critical for GPU computing

Unified virtual memory, available on Linux, will accelerate GPU-computing problems with very large data sets

Dynamic load balancing of graphics and computing applications delivers better GPU-computing and graphics mixed-mode operations

Designed to accelerate GPU-based ray tracing, video rendering and high-end color grading

Boasts 8K display resolutions with support for DisplayPort 1.4

The basic characteristics of the two new Quadros are laid out below.

Feature          Quadro P5000         Quadro P6000
GPU              Pascal, GP102        Pascal, GP102
CUDA Cores       2560                 3840
Memory           16 GB GDDR5X         24 GB GDDR5X
Display Outputs  4x DP 1.4 & 1x DVI   4x DP 1.4 & 1x DVI
Display Support  4 x 4K at 120 Hz     4 x 4K at 120 Hz
                 4 x 5K at 60 Hz      4 x 5K at 60 Hz
Available        October 2016         October 2016
Pricing          Not Available        Not Available

It is worth pointing out that the Quadro P6000 is absolutely the fastest, most powerful graphics board in the NVIDIA family. Not only does it have 24 GB of GDDR5X memory, twice the memory of the Titan X, it also has 3840 CUDA cores compared to the Titan X’s 3584 CUDA cores. There is no faster GPU in the NVIDIA lineup.

Another key point for professionals is that the unified memory architecture of the Quadro P6000 and P5000 is ideal for compute applications with huge data sets running on Linux. The unified memory architecture allows tasks of unlimited size to be rendered and calculated. Note that unified memory is available only on Linux; it does not exist under Windows.

Virtual reality is extremely popular in the consumer space. Professionals, however, have been wrestling with VR, stereoscopic 3D displays and the related technical troubles for over two decades. The new Quadro products use a technique called single-pass multi-projection. It allows the Quadro P5000 and P6000 to process the 3D scene one time and generate two perspective views: one for the right eye and one for the left eye. This doubles the performance for stereo projections which, in turn, doubles your budget for complexity and fidelity in the VR or 3D stereo image.
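The geometric idea behind generating two perspective views from one camera pose is to offset the viewpoint by half the interpupillary distance (IPD) for each eye. This sketch shows only that offset step, not NVIDIA's single-pass multi-projection hardware path; the IPD value and function names are illustrative:

```python
IPD = 0.064  # assumed interpupillary distance in meters

def eye_positions(camera_pos, right_axis, ipd=IPD):
    """Return (left_eye, right_eye) positions for a camera position
    and its unit right-pointing axis vector."""
    half = ipd / 2
    left = tuple(c - half * r for c, r in zip(camera_pos, right_axis))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_axis))
    return left, right

left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
print(left)   # (-0.032, 1.7, 0.0)
print(right)  # (0.032, 1.7, 0.0)
```

Conventionally the scene geometry is submitted once per eye; processing it once and emitting both projections in a single pass is what cuts the stereo cost roughly in half.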

The Pascal architecture can dynamically balance graphics and computing work on the GPU. This enhances the Quadro’s ability to work interactively with realistic, GPU-computed ray tracing in the application viewport. Imagine Iray’s realistic rendering in a Maya viewport for a faster lighting workflow in visual effects (VFX) scenes.

The professional world is moving beyond 4K. The new Quadro GPUs support DisplayPort 1.4 and 8K resolutions, and they support four simultaneous 5K displays. Today that is a boon to professionals in film and special effects, and it will be for engineers, too. It is not hard to imagine 5K displays, currently running USD$1,600, replacing 4K displays at the sub-USD$800 price point in the future.

post nv foto2
NVIDIA’s recommendations for Quadro models give software applications quite a bit of headroom. (Image courtesy of NVIDIA.)

Users, especially those with budgets to consider, may elect to look at graphics hardware a level below NVIDIA’s recommendations. NVIDIA appears to have taken care to spec a card for the power user of each software application, not wanting such a user to be hardware constrained. But consider a typical user, say for 3ds Max, who does 3D modeling but seldom, if ever, uses the Iray plugin for GPU ray tracing and rendering. For that user, the Quadro K1200 might do well and be preferable to the recommended Quadro M4000.

Users should also determine whether their particular application can function with, or even make use of, the NVIDIA hardware being recommended. For example, ANSYS Fluent’s results are more accurate using double-precision floating point operations, but the GP102 GPUs in the Quadro P6000 recommended above are optimized for single-precision floating point operations. Heavy users of simulation solvers should be steered toward the new Tesla P100 GPUs, which are optimized for double-precision floating point computing and meant for graphics cards in HPC nodes in data centers rather than in desktop workstations.
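The practical difference between single and double precision is easy to demonstrate: a 32-bit float carries roughly 7 significant decimal digits against roughly 16 for a 64-bit double, so small increments simply vanish in single precision. A quick sketch using Python's struct module to round a value to 32 bits:

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Adding 1e-8 to 1.0 is below single precision's resolution (~1.2e-7
# near 1.0), so the increment is silently lost.
print(to_float32(1.0 + 1e-8))  # 1.0: the increment disappears
print(1.0 + 1e-8)              # 1.00000001: double precision keeps it
```

In an iterative solver, losses like this compound over millions of operations, which is why double-precision hardware matters for simulation accuracy.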

A Final Word:

NVIDIA is upgrading the Quadro family with the Pascal architecture faster than any architecture switch that I can remember.

The Pascal architecture is faster than any previous architecture.

The Quadro P6000 surpasses the raw performance of its consumer/gamer counterpart, the USD$1,200 Titan X, with 300 additional CUDA cores and nearly double the graphics memory. In addition, the new architecture, coupled with NVIDIA’s excellent GPU-computing support for ray tracing as well as its power to accelerate VR, makes the Quadro P6000 and the P5000 worth considering as professional workstation graphics boards for those doing rendering or developing/viewing VR content.

NVIDIA does not expect to ship the Quadro P6000 and the P5000 until October and has not released any pricing. Bear in mind that the NVIDIA Quadro M6000, which the P6000 replaces, was selling at a price of USD$4,000.


Engineers Target Cancerous Tumors with Nanobots

post cancer foto1
(Image courtesy of the Institute of Biomedical Engineering, Polytechnique Montréal.)

Engineering researchers have developed new nanorobotic agents capable of navigating through the bloodstream to administer a drug with precision by specifically targeting the active cancerous cells of tumours.

This way of injecting medication ensures optimal targeting of a tumour and avoids jeopardizing the integrity of organs and surrounding healthy tissues. As a result, drug dosages that are highly toxic to the human organism can be significantly reduced.

This breakthrough is the result of research done on mice, which were successfully administered nanorobotic agents into colorectal tumours.

“These legions of nanorobotic agents were actually composed of more than 100 million flagellated bacteria – and therefore self-propelled – and loaded with drugs that moved by taking the most direct path between the drug’s injection point and the area of the body to cure, ” explains Sylvain Martel, director of the Polytechnique Montréal Nanorobotics Laboratory, who heads the research team’s work. “The drug’s propelling force was enough to travel efficiently and enter deep inside the tumours. ”

When they enter a tumour, the nanorobotic agents can autonomously detect the wholly oxygen-depleted areas of the tumour, known as hypoxic zones, and deliver the drug to them.

The hypoxic zone is created by the substantial consumption of oxygen by rapidly proliferating tumour cells. Hypoxic zones are known to be resistant to most therapies, including radiotherapy.

Gaining access to tumours by taking paths as minute as a red blood cell and crossing complex physiological micro-environments does not come without challenges. So Martel and his team turned to nanotechnology to do it.

Bacteria with a Compass:

To move around, the bacteria used by Martel’s team rely on two natural systems. A kind of compass, created by the synthesis of a chain of magnetic nanoparticles, enables them to move in the direction of a magnetic field, while a sensor measuring oxygen concentration enables them to reach and remain in the tumour’s active regions.

By harnessing these two transport systems and by exposing the bacteria to a computer-controlled magnetic field, the researchers showed that these bacteria could perfectly replicate the artificial nanorobots of the future designed for this kind of task.
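The two-system steering scheme can be caricatured in a few lines: push the agent along the magnetic field direction and stop once the local oxygen level falls into the preferred hypoxic band. The oxygen profile, step size and threshold below are invented for illustration, not taken from the study:

```python
def oxygen(x):
    """Assumed oxygen profile: high at the injection site (x = 0),
    falling off linearly toward the tumour interior."""
    return max(0.0, 1.0 - 0.1 * x)

def steer(field_direction, hypoxic_level=0.2, step=0.5, max_steps=100):
    """Move along the magnetic field line until the oxygen sensor
    reports the preferred hypoxic level; then stay put."""
    x = 0.0
    for _ in range(max_steps):
        if oxygen(x) <= hypoxic_level:   # preferred zone reached
            break
        x += field_direction * step      # swim along the field line
    return x

print(steer(field_direction=1.0))  # stops once oxygen drops to the band
```

The external magnetic field plays the role of the computer-controlled input; the oxygen sensing is the bacteria's own aerotaxis doing the fine targeting.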

“This innovative use of nanotransporters will have an impact not only on creating more advanced engineering concepts and original intervention methods, but it also throws the door wide open to the synthesis of new vehicles for therapeutic, imaging and diagnostic agents, ” Martel added.

“Chemotherapy, which is so toxic for the entire human body, could make use of these natural nanorobots to move drugs directly to the targeted area, eliminating the harmful side effects while also boosting its therapeutic effectiveness, ” Martel concluded.

The research is published under the title “Magneto-aerotactic bacteria deliver drug-containing nanoliposomes to tumour hypoxic regions” in the journal Nature Nanotechnology.


Current Electric Vehicles Could Replace 90 Percent of Vehicles on the Road Today

post cur foto1
Nighttime image of New York, with red indicating high population density. (Image courtesy of Doc Searles/MIT.)

A recent study has found that the wholesale replacement of conventional vehicles with electric vehicles (EVs) is possible today and could play a substantial role in meeting climate change mitigation goals.

“Approximately 90 percent of the personal vehicles on the road on a daily basis could be replaced by a low-cost electric vehicle available on the market today, even if the cars can only charge overnight,” said Jessika Trancik, professor in energy studies at MIT and lead researcher. “[This] would more than meet near-term U.S. climate targets for personal vehicle travel.”

Overall, when accounting for the emissions today from the power plants supplying the electricity, this would result in an approximately 30 percent decrease in emissions from transportation. Deeper emissions cuts would be realized if power plants decarbonize over time.
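
As a back-of-envelope illustration of how a figure like 30 percent can arise, the calculation below multiplies the share of travel replaced by the personal-vehicle share of transportation emissions and the per-mile emissions cut. The last two numbers are assumptions for illustration only, not values from the study:

```python
# Transportation emissions fall by:
#   (fraction of travel replaced) x (personal-vehicle share) x (per-mile cut)
travel_replaced = 0.90         # share of personal-vehicle travel shifted to EVs
personal_vehicle_share = 0.60  # assumed share of transportation emissions
per_mile_cut = 0.55            # assumed EV emissions cut per mile, today's grid

reduction = travel_replaced * personal_vehicle_share * per_mile_cut
print(f"Transportation emissions reduction: {reduction:.0%}")
```

With these assumed shares the product comes out at roughly 30 percent, matching the order of magnitude reported above.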

Combining Two Huge Datasets:

The complete project took four years, including developing a method of integrating two massive datasets: one highly detailed set of second-by-second driving behavior based on GPS data, and a broader, more comprehensive set of national data based on travel surveys. Together, the two datasets encompass millions of trips made by drivers all around the country.

The detailed GPS data was collected by state agencies in Texas, Georgia, and California, using special data loggers installed in cars to assess statewide traveling patterns. The more comprehensive, but less detailed, nationwide data came from a national household transportation survey, which studied households across the country to learn about how and where people actually do their driving.

The researchers needed to understand “the distances and timing of trips, the different driving behaviors, and the ambient weather conditions,” said Zachary Needell, a graduate student who collaborated on the research.

By developing algorithms to integrate the different sets of information and thereby reconstruct one-second-resolution drive cycles, the researchers were able to demonstrate that the daily energy requirements of some 90 percent of personal vehicles on the road in the U.S. could be met by today’s EVs, with their current ranges.
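
The core feasibility test can be sketched as follows: integrate a second-by-second drive cycle into daily energy use and compare it against an affordable EV’s battery capacity. The consumption figure and pack size below are illustrative assumptions, not the values used in the study’s model:

```python
def daily_energy_kwh(speeds_mph, wh_per_mile=250):
    """Energy for one day of driving from 1 Hz speed samples (mph).

    wh_per_mile is an assumed average EV consumption figure.
    """
    miles = sum(v / 3600.0 for v in speeds_mph)  # each sample covers one second
    return miles * wh_per_mile / 1000.0

def ev_can_cover(speeds_mph, battery_kwh=24):
    """Can one overnight charge (e.g. an early 24 kWh Leaf pack) cover the day?"""
    return daily_energy_kwh(speeds_mph) <= battery_kwh

# A hypothetical day: an hour at 30 mph plus an hour at 60 mph = 90 miles.
day = [30] * 3600 + [60] * 3600
print(daily_energy_kwh(day), ev_can_cover(day))
```

Here 90 miles at 250 Wh/mile comes to 22.5 kWh, inside the assumed 24 kWh pack, so this hypothetical day could be covered by a single overnight charge.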

The overall cost to vehicle owners – including both purchase and operating costs – would be no higher than that of conventional internal-combustion vehicles. The team looked at once-daily charging, at home or at work, in order to study the adoption potential given today’s charging infrastructure.

What’s more, such a large-scale replacement would be sufficient to meet the nation’s stated near-term emissions-reduction targets for personal vehicles’ share of the transportation sector – a sector that accounts for about a third of the nation’s overall greenhouse gas emissions, with a majority of those emissions coming from privately owned, light-duty vehicles.

Settling the EV Debate:

While EVs have numerous devotees, they also have plenty of critics, who cite range anxiety as a barrier to transportation electrification. “This is a matter where common sense can lead to strongly opposing views,” Trancik said. “Many seem to feel strongly that the potential is small, and the rest that it is large.”

“Developing the concepts and mathematical models required for a testable, quantitative analysis is helpful in these situations, where so much is at stake,” she added.

Those who think the potential is small cite the premium prices of many EVs available today, such as the highly rated but expensive Tesla models, and the still-limited range that lower-cost EVs can travel on a single charge, compared to the range of a gasoline car on one tank of gas.

The lack of available charging infrastructure in many places, and the much longer time required to recharge a car compared to simply filling a gas tank, are also cited as drawbacks.

post cur foto2
(Image courtesy of MIT. )

Nevertheless, the team found that the vast majority of cars on the road consume no more energy in a day than the battery capacity of affordable EVs available today. These numbers represent a scenario in which people would do most of their recharging overnight at home, or during the day at work, so for such trips the lack of public infrastructure was not really a problem.

Vehicles like the Ford Focus Electric or the Nissan Leaf would be adequate to meet the needs of almost all U.S. drivers. Although their sticker prices are still higher than those of conventional vehicles, their overall lifetime costs end up comparable because of lower maintenance and operating expenses.
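
A simple illustration of how lifetime costs can converge despite a higher sticker price; all of the prices and mileage figures below are assumed for illustration, not data from the study:

```python
def lifetime_cost(sticker, cost_per_mile, miles=150_000):
    """Total cost of ownership: purchase price plus per-mile running costs."""
    return sticker + cost_per_mile * miles

ev = lifetime_cost(sticker=30_000, cost_per_mile=0.07)   # electricity, low upkeep
gas = lifetime_cost(sticker=22_000, cost_per_mile=0.13)  # fuel plus maintenance

print(f"EV: ${ev:,.0f}  Gasoline: ${gas:,.0f}")
```

With these assumed figures the EV’s $8,000 purchase premium is more than recovered over the vehicle’s life by a six-cent-per-mile operating advantage.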

The Electric Vehicle Range Barrier:

The study cautions that for EV ownership to rise to high levels, the needs of drivers have to be met on all days. For days on which energy consumption is higher, such as vacations, or days when an intensive need for heating or cooling would sharply curb the EV’s distance range, driving needs could be met by using a different car (in a two-car household), by renting, or by using a car-sharing service.

The study highlights the important role that car sharing of internal combustion engine vehicles could play in driving electrification. Car sharing would have to be very convenient for this to work, Trancik said, and would require additional business-model innovation.

Additionally, the days on which alternatives are needed ought to be known to drivers beforehand – information that the team’s model, TripEnergy, is able to provide.

Even as batteries improve, there will still be a small number of high-energy days that exceed the range provided by electric vehicles. For those days, other powertrain technologies will be needed.

The study helps policymakers to quantify the “returns” on improving batteries through investment in research, for instance, and the gap that may need to be filled by other kinds of cars, such as those fueled by low-emissions biofuels or hydrogen, to reach very low emissions levels for the transportation sector.

Another important finding from the study was that the potential for shifting to EVs is fairly uniform across different parts of the country. “The adoption potential of electric vehicles is remarkably similar across cities,” Trancik said, “from dense cities like New York to sprawling metropolitan areas like Houston. This goes against the view that electric vehicles – at least affordable ones, which have limited range – only really work in dense urban centers.”


British Technology Initiative Aims to Develop New Spy Gadgets

post bi foto1
(Image courtesy of MoD/Animal Dynamics. )

I’ve never been the biggest James Bond fan. Sure, I’ve seen the Connery classics, watched the Brosnan-era flicks and even caught a few of the newest movies, but Bond as a whole has never grabbed me.

Before you go off believing that I think the Bond films are pish posh, I can say that I’ve always enjoyed the scenes with Bond and Q, where a set of technological McGuffins is introduced as Bond’s new arsenal.

Well, in a recent statement, Britain’s Ministry of Defence (MoD) announced a new £800m technological initiative that appears ripped straight from a James Bond film reel.

According to the MoD proposal, this new initiative will be led by an Innovation and Research Insights Unit (IRIS), which will forecast emerging technological trends and assess what effects those advancements could have on Britain’s security.

With a general notion of the future mapped out, the IRIS team will engage “the best and brightest individuals and companies,” asking them to pitch technological answers to IRIS’s forecasts in a “Dragon’s Den-style panel.”

IRIS, Dragon’s Den. Does it get any more preposterously cloak-and-dagger than that?

If a project is accepted, the IRIS team will shepherd it through to completion at a dedicated defense and security accelerator.

“This new approach will help keep Britain safe while supporting our economy, with our brightest brains keeping us ahead of our adversaries,” said Defence Secretary Michael Fallon.

post bi foto2
(Image courtesy of MoD/University of Birmingham.)

While the IRIS-led initiative gets off the ground, the MoD has designated some of its newest technologies in development as harbingers of what may come from the IRIS initiative.

First off, the MoD is developing a drone named Skeeter. Unlike other drones, Skeeter won’t mimic a plane or helicopter. Instead, the micro-device will take its flight cues from the dragonfly. Equipped with four wings, Skeeter will be small and nimble, making it perfect for stealthy intelligence gathering.

Second on the MoD’s upcoming tech list is a quantum gravimeter. Developed in cooperation with the University of Birmingham, the portable machine will use quantum technology and a pair of gravimeters to accurately map tunnel networks and underground bunkers from the planet’s surface.
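
The underlying physics can be sketched with a point-mass approximation: a buried void produces a tiny deficit in surface gravity, and differencing the readings of a pair of gravimeters cancels common-mode noise such as ground vibration. The void geometry and rock density below are assumptions for illustration, not the Birmingham instrument’s parameters:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def void_anomaly_ugal(radius_m, depth_m, rock_density=2700.0):
    """Surface gravity deficit directly above a spherical underground void.

    Approximates the void as a negative point mass (the missing rock):
    delta_g = G * M_missing / depth^2, returned in microGal
    (1 microGal = 1e-8 m/s^2).
    """
    missing_mass = rock_density * (4.0 / 3.0) * math.pi * radius_m ** 3
    delta_g = G * missing_mass / depth_m ** 2  # m/s^2
    return delta_g * 1e8

# A 2 m radius tunnel void 5 m underground produces a signal of a few tens
# of microGal -- far too small for classical field instruments to pick out
# of the noise, but within reach of quantum gravimeters.
print(f"{void_anomaly_ugal(2.0, 5.0):.1f} microGal")
```

This order-of-magnitude estimate shows why two gravimeters are used: the anomaly is roughly a hundred-millionth of Earth’s surface gravity, so suppressing vibration noise by differencing is essential.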

Not only will this technology make it easier for the military to detect hidden enemy lairs (a seriously James Bond problem), it could also come in handy during natural disasters, where it could be deployed to find survivors trapped amidst the rubble.

The MoD has also ominously suggested that it’s developing “laser weapons to target and defeat aerial threats. ”

It really doesn’t get more Bondian than that.


Fighting Fire with AI

post fire foto1
(Image courtesy of NASA. )

Firefighting, arguably one of the world’s most dangerous professions, could become much safer next year thanks to a newly developed AI program.

When firefighters enter a building, they have to rely on their senses and training to find trapped civilians and deliver them from danger. While drilled instincts and behaviors are essential tools for every firefighter, they can’t compare to the insight that can be gleaned from big data.

Over the last nine months, the U.S. Department of Homeland Security and NASA’s Jet Propulsion Laboratory have been hard at work developing an artificial intelligence program that can leverage big data to keep firefighters safe.

Named the Assistant for Understanding Data through Reasoning, Extraction, and sYnthesis (AUDREY), the system can monitor firefighters as they move through a structure using sensors embedded in the first responders’ uniforms.

“As a firefighter moves through an environment, AUDREY could send alerts through a mobile device or head-mounted display,” said Mark James of JPL, lead scientist for the AUDREY project.

Armed with a suite of sensors that can detect heat in adjacent rooms, concentrations of dangerous gases, and detailed maps of a structure, firefighters would be able to move through a structure in the safest, most efficient manner, making it possible to save more lives and protect their own.
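
A minimal sketch of the kind of rule-based alerting such a system might perform on top of its sensor feeds; the sensor names and threshold values are hypothetical, not JPL’s actual AUDREY logic:

```python
# Alert when a reading crosses a danger threshold (hypothetical limits).
HIGH_LIMITS = {"temp_c": 150.0, "co_ppm": 1200.0}  # alert above these
LOW_LIMITS = {"o2_pct": 17.0}                      # alert below these

def check_readings(readings):
    """Return alert strings for any sensor outside its safe range."""
    alerts = []
    for key, limit in HIGH_LIMITS.items():
        if key in readings and readings[key] > limit:
            alerts.append(f"{key} high: {readings[key]}")
    for key, limit in LOW_LIMITS.items():
        if key in readings and readings[key] < limit:
            alerts.append(f"{key} low: {readings[key]}")
    return alerts

# One sample from a uniform's sensor suite: only temperature is dangerous.
print(check_readings({"temp_c": 180.0, "co_ppm": 400.0, "o2_pct": 20.9}))
```

In the real system, fixed thresholds like these would only be a baseline; the value AUDREY adds comes from the cloud-hosted learning layer described below.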

But sensors alone aren’t good enough to make AUDREY work. The brains of the AUDREY AI systems run on the cloud, leveraging computing power and the system’s ability to learn and make predictions about what first responders will need in the immediate future.

Though it’s only a few weeks old, the AUDREY system has already been tested in a virtual demonstration at the Public Safety Broadband Stakeholder Meeting held in San Diego. During the test, AUDREY was given data from a number of different sensors and was expected to give recommendations to several phantom first responders via mobile device. While JPL didn’t explicitly state how the test went, Edward Chow, manager of JPL’s Civil Program Office, did say that within a year AUDREY will begin field demonstrations.


Berkeley Roboticist Learns Lessons About Humanity from Robots

Ken Goldberg starts his talk with a big idea: robots can inspire us all to be better human beings. In his TED Talk, “4 lessons from robots about being human,” Goldberg examines what happens to humanity as robots become more woven into society. Four different projects are discussed, along with the life lessons that Goldberg has pulled from the robots.

In 1993 Goldberg was introduced to the new World Wide Web by his students, and was struck by the idea that anyone in the world could use the technology to control the robots in his lab. The Telegarden was a robot with an attached camera that remote users could control to take a tour of a big garden table. Users could help to water the garden, and eventually were given seeds to plant in it.

post tele foto1

A random question from a student about whether or not the robot was real led Goldberg down a path of philosophical discovery. He coined a new term, telepistemology – the study of knowledge at a distance. This project and the questioning of the project’s reality taught Goldberg to always question assumptions, both society’s and his own.

The second project grew out of the robot garden project and its ideas about robots interacting with people. The group created a tele-actor: a person outfitted with wires, microphones and cameras who could act as a robot. The tele-actor would go into remote environments, and people watching online could experience what the actor was seeing and hearing, and choose what actions the tele-actor would take. When the online community couldn’t decide what to do, the tele-actor would proceed on gut instinct. This taught Goldberg another lesson: when in doubt, improvise.

The third lesson was learned when Goldberg’s father was in the hospital undergoing chemotherapy. Brachytherapy was also being performed at the hospital, and Goldberg worked with his students to develop a robot that would target tumors with radiation while avoiding the body’s organs. The project taught him the lesson that when your path is blocked, you pivot.

Finally, Goldberg discussed the da Vinci surgical robot and giving a surgeon freedom to concentrate on the complicated parts of surgery while automating the non-essential tasks. Combining several human motion captures with dynamic time warping, iterative learning and Kalman filtering, Goldberg was able to teach the movement sequences to a robot that could, over time, work at ten times the speed of a human. This project taught the lesson that there’s no substitute for practice, practice, practice.
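
Dynamic time warping, one of the techniques mentioned, aligns two motion sequences recorded at different speeds. A minimal textbook implementation over 1-D traces (not Goldberg’s actual code) looks like this:

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance.

    Fills a cumulative-cost table where each cell extends the cheapest of
    the three possible alignments (match, insert, delete).
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# The same gesture performed slowly and quickly still matches closely,
# which is what makes DTW useful for aligning motion captures.
slow = [0, 1, 2, 3, 3, 3, 2, 1, 0]
fast = [0, 2, 3, 2, 0]
print(dtw_distance(slow, fast))
```

Averaging several captures after DTW alignment is one common way to distill a canonical movement sequence that a robot can then replay and refine.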

Ken Goldberg is a compelling speaker and does a great job of framing his projects and ideas in simple and easily understandable terms. This TED Talk is a few years old but full of incredibly interesting ideas about human-robot interactions.

post tele foto2


White Light Wi-Fi

post wifi foto1
A nanocrystal-based material converts blue laser emission to white light for combined illumination and data communication. (Image courtesy of KAUST.)

A nanocrystalline material that rapidly makes white light out of blue light could pave the way to improved visible-light communication (VLC).

While Wi-Fi and Bluetooth are now well established technologies, there are several advantages gained by shortening the wavelength of the electromagnetic waves used for transmitting information.

VLC makes use of parts of the electromagnetic spectrum that are unregulated and is potentially more energy-efficient. VLC also offers a way to combine information transmission with illumination and display technologies – for example, using ceiling lights to provide internet connections to laptops.

Many such VLC applications require light-emitting diodes (LEDs) that produce white light. These are usually fabricated by combining a diode that emits blue light with a phosphor that turns some of this radiation into red and green light. However, this conversion process is not fast enough to match the speed at which the LED can be switched on and off.

“VLC using white light generated in this way is limited to about one hundred million bits per second,” said Boon Ooi, professor of electrical engineering at King Abdullah University of Science & Technology (KAUST).

Instead, Ooi, associate professor Osman Bakr and their co-workers used a nanocrystal-based converter that enables much higher data rates.

The team created nanocrystals of cesium lead bromide that were roughly eight nanometers in size, using a simple and cost-effective solution-based method that incorporated a conventional nitride phosphor. When illuminated by blue laser light, the nanocrystals emitted green light while the nitride emitted red light. Together, these combined to create a warm white light.

The researchers characterized the optical properties of their material using a technique known as femtosecond transient spectroscopy. They were able to show that the optical processes in cesium lead bromide nanocrystals occur on a time-scale of roughly seven nanoseconds. This meant they could modulate the optical emission at a frequency of 491 Megahertz, 40 times faster than is possible using phosphor, and transmit data at a rate of two billion bits per second.
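
These figures can be sanity-checked with simple arithmetic: the phosphor bandwidth implied by the stated 40× speedup, and the spectral efficiency needed to fit two gigabits per second into 491 MHz, which is consistent with a multi-level modulation scheme. This is a back-of-envelope check, not a calculation from the paper:

```python
# Back-of-envelope check of the reported VLC figures.
nanocrystal_mhz = 491.0   # modulation frequency of the nanocrystal converter
speedup = 40.0            # stated advantage over phosphor conversion
data_rate_bps = 2e9       # reported data rate, bits per second

phosphor_mhz = nanocrystal_mhz / speedup               # implied phosphor bandwidth
bits_per_hz = data_rate_bps / (nanocrystal_mhz * 1e6)  # required spectral efficiency

print(f"Implied phosphor bandwidth: {phosphor_mhz:.1f} MHz")
print(f"Required spectral efficiency: {bits_per_hz:.1f} bits/s per Hz")
```

The implied phosphor bandwidth of about 12 MHz lines up with the “hundred million bits per second” ceiling quoted earlier, and roughly 4 bits/s per Hz is achievable with multi-level modulation rather than simple on-off keying.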

“The rapid response is partly due to the size of the crystals,” said Bakr. “Spatial confinement makes it much more likely that the electron will recombine with a hole and emit a photon.”

Importantly, the white light generated using their perovskite nanostructures was of a quality comparable to present LED technology.

“We believe that white light generated using semiconductor lasers will one day replace LED white-light bulbs for energy-efficient lighting,” said Ooi.
