Discover opportunities to engage students in science, technology, engineering and math (STEM) with lessons and resources inspired by the latest happenings at NASA.

› Learn more and explore the collection



Buzz Aldrin stands on the Moon in his puffy, white spacesuit next to an American flag. The lunar module casts a long, dark shadow nearby.

In the News

This year marks the 50th anniversary of humans landing on the Moon. Now NASA is headed to the Moon once again, using it as a proving ground for a future human mission to Mars. Use this opportunity to get students excited about Earth's natural satellite, the amazing feats accomplished 50 years ago and plans for future exploration.

How They Did It

When NASA was founded in 1958, scientists were unsure whether the human body could even survive orbiting Earth. Space is a demanding environment. Depending on where in space you are, it can lack adequate air for breathing, be very cold or hot, and have dangerous levels of radiation. Additionally, the physics of space travel make everything inside a space capsule feel weightless even while it's hurtling through space. Floating around inside a protective spacecraft may sound fun, and it is, but it also can have detrimental effects on the human body. Plus, it can be dangerous with the hostile environment of space lurking on the other side of a thin metal shell.

In 1959, NASA's Jet Propulsion Laboratory began the Ranger project, a mission designed to impact the Moon – in other words, make a planned crash landing. During its descent, the spacecraft would take pictures that could be sent back to Earth and studied in detail. These days, aiming to merely impact a large solar system body sounds rudimentary. But back then, engineering capabilities and course-of-travel, or trajectory, mathematics were being developed for the first time. A successful impact would be a major scientific and mathematical accomplishment. In fact, it took until July 1964 to achieve the monumental task, with Ranger 7 becoming the first U.S. spacecraft to successfully capture and return images of the Moon as it descended toward impact on the lunar near side.

Side-by-side images of a model of the Ranger 7 spacecraft in color and a black and white image of the Moon taken by Ranger 7.

These side-by-side images show a model of the Ranger 7 spacecraft (left) and an image the spacecraft took of the Moon (right) before it impacted the surface. Image credit: NASA/JPL-Caltech | › + Expand image

After the successful Ranger 7 mission, two more Ranger missions were sent to the Moon. Then, it was time to land softly. For this task, JPL partnered with Hughes Aircraft Corporation to design and operate the Surveyor missions between 1966 and 1968. Each of the seven Surveyor landers was equipped with a television camera – later landers carried scientific instruments, too – aimed at obtaining up-close lunar surface data to assess the Moon's suitability for a human landing. The Surveyors also demonstrated in-flight maneuvers and in-flight and surface-communications capabilities.

Side-by-side image of an astronaut next to the Surveyor 7 lander and a mosaic of images from Surveyor 3

These side-by-side images show Apollo 12 Commander Charles Conrad Jr. posing with the Surveyor 7 spacecraft on the Moon (left) and a mosaic of images taken by Surveyor 3 on the lunar surface (right). Image credits: NASA/JPL-Caltech | › + Expand image

In 1958, at the same time JPL was developing the technological capabilities to get to the Moon, NASA began the Mercury program to see if it was possible for humans to function in space. The single-passenger Mercury program's six successful flights – two carrying astronauts on suborbital trips and four placing astronauts into Earth orbit – kicked off the era of U.S. human spaceflight.

Cutaway illustration of the Mercury capsule with a single astronaut inside.

The success of the single-passenger Mercury capsule, shown in this illustrated diagram, proved that humans could live and work in space, paving the way for future human exploration. Image credit: NASA | › Full image and caption

Beginning in 1965, NASA's Gemini program proved that a larger capsule containing two humans could orbit Earth, allowing astronauts to work together to accomplish science in orbit for long-duration missions (up to two weeks in space) and laying the groundwork for a human mission to the Moon. With the Gemini program, scientists and engineers learned how spacecraft could rendezvous and dock while in orbit around Earth. They were also able to perfect re-entry and landing methods and began to better understand the effects of longer space flights on astronauts. After the successful Gemini missions, it was time to send humans to the Moon.

Cutaway illustration of the Gemini spacecraft with two astronauts inside.

The Gemini spacecraft, shown in this illustrated cutaway, paved the way for the Apollo missions. Image credit: NASA | › Full image and caption

The Apollo program began in earnest after President John F. Kennedy directed NASA in May 1961 to place humans on the Moon by the end of the decade. This was a formidable task, as no hardware existed at the time that would accomplish the feat. NASA needed to build a giant rocket, a crew capsule and a lunar lander. And each component needed to function flawlessly.

Rapid progress was made, involving numerous NASA and contractor facilities and hundreds of thousands of workers. A crew capsule was designed, built and tested for spaceflight and landing in water by the NASA contractor North American Aviation, which eventually became part of Boeing. A lunar lander was developed by the Grumman Corporation. Though much of the astronaut training took place at or near the Manned Spacecraft Center, now known as NASA's Johnson Space Center, in Texas, astronauts practiced lunar landings here on Earth using simulators at NASA's Dryden (now Armstrong) Flight Research Center in California and at NASA's Langley Research Center in Virginia.

The enormous Saturn V rocket was a marvel of complexity. Its first stage was developed by NASA's Marshall Space Flight Center in Alabama. The upper-stage development was managed by the Lewis Flight Propulsion Center, now known as NASA's Glenn Research Center, in Ohio in partnership with North American Aviation and Douglas Aircraft Corporation, while Boeing integrated the whole vehicle. The engines were tested at what is now NASA's Stennis Space Center in Mississippi, and the rocket was transported in pieces by water for assembly at Cape Kennedy, now NASA's Kennedy Space Center, in Florida. As the Saturn V was being developed and tested, NASA also developed a smaller, interim vehicle known as the Saturn I and started using it to test Apollo hardware. A Saturn I first flew the Apollo command module design in 1964.

Unfortunately, one crewed test of the Apollo command module turned tragic in January 1967, when a fire erupted in the capsule and killed all three astronauts who had been designated as the prime crew for what became known as Apollo 1. The command module design was altered in response, delaying the first crewed Apollo launch by 21 months. In the meantime, NASA flew several uncrewed Apollo missions to test the Saturn V. The first crewed Apollo launch became Apollo 7, flown on a Saturn IB, and proved that the redesigned command module would support its crew while remaining in Earth orbit. Next, Earth-Moon trajectories were calculated for this large capsule, and the Saturn V-powered Apollo 8 set off for the Moon, proving that the calculations were accurate, orbiting the Moon was feasible and a safe return to Earth was possible. Apollo 8 also provided the first TV broadcast from lunar orbit. The next few Apollo missions further proved the technology and allowed humans to practice procedures that would be needed for an eventual Moon landing.

On July 16, 1969, a Saturn V rocket launched three astronauts to the Moon on Apollo 11 from Cape Kennedy. The Apollo 11 spacecraft had three parts: a command module, called "Columbia," with a cabin for the three astronauts; a service module that provided propulsion, electricity, oxygen and water; and a lunar module, "Eagle," that provided descent to the lunar surface and ascent back to the command and service modules.

Collage of three images showing the lunar module during its descent to the Moon, on the lunar surface and during its ascent.

In this image collage, the Apollo 11 lunar module is shown on its descent to the Moon (left), on the lunar surface as Buzz Aldrin descends the stairs (middle), and on its ascent back to the command module (right). Image credit: NASA | › View full image collection

On July 20, while astronaut and command module pilot Michael Collins orbited the Moon, Neil Armstrong and Buzz Aldrin landed Eagle on the Moon and set foot on the surface, accomplishing a first for humankind. They collected regolith (surface "dirt") and rock samples, set up experiments, planted an American flag and left behind medallions honoring the Apollo 1 crew and a plaque that read, "We came in peace for all mankind."

Collage of images showing Buzz Aldrin doing various activities on the Moon.

This collage of images from the Apollo 11 Moon landing shows Buzz Aldrin posing for a photo on the Moon (left) and setting up the solar wind and seismic experiments (middle). The image on the right shows the plaque the team placed on the Moon to commemorate the historic event. Image credit: NASA | › View full image collection

After 21.5 hours on the lunar surface, Armstrong and Aldrin rejoined Collins in the Columbia command module and, on July 21, headed back to Earth. On July 24, after jettisoning the service module, Columbia entered Earth's atmosphere. With its heat shield facing forward to protect the astronauts from the extreme heat of atmospheric entry, the craft slowed and a series of parachutes deployed. The module splashed down in the South Pacific Ocean, 380 kilometers (210 nautical miles) south of Johnston Atoll. Because scientists were uncertain about contamination from the Moon, the astronauts donned biological-isolation garments delivered by divers from the recovery ship, the aircraft carrier USS Hornet. The astronauts boarded a life raft and then the USS Hornet, where the outside of their biological-isolation suits was washed down with disinfectant. To be sure no contamination was brought back to Earth from the Moon, the astronauts were quarantined until Aug. 10, at which point scientists determined the risk was low that biological contaminants or microbes had returned with the astronauts. Columbia was also disinfected and is now part of the National Air and Space Museum in Washington, D.C.

On the left, a capsule floats in the ocean while astronauts in gray suits sit in a raft. On the right, the three astronauts smile while looking out of a small window as Nixon faces them with a microphone in front of him.

These side-by-side images show the Apollo 11 astronauts leaving the capsule in their biological isolation garments after successfully splashing down in the South Pacific Ocean (left). At right, President Richard M. Nixon welcomes the Apollo 11 astronauts, (left to right) Neil A. Armstrong, Michael Collins and Buzz Aldrin, while they peer through the window of the Mobile Quarantine Facility aboard the USS Hornet. Image credit: NASA | › View full image collection

The Apollo program continued with six more missions to the Moon over the next three years. Astronauts placed seismometers to measure "moonquakes" and other science instruments on the lunar surface, performed science experiments, drove a carlike moon buggy on the surface, planted additional flags and returned more lunar samples to Earth for study.

Why It's Important

Apollo started out as a demonstration of America's technological, economic and political prowess, which it accomplished with the first Moon landing. But the Apollo missions accomplished even more in the realm of science and engineering.

Some of the earliest beneficiaries of Apollo research were Earth scientists. The Apollo 7 and 9 missions, which stayed in Earth orbit, took photographs of Earth in different wavelengths of light, highlighting things that might not be seen on the ground, like diseased trees and crops. This research led directly to the joint NASA-U.S. Geological Survey Landsat program, which has been studying Earth's resources from space for more than 45 years.

Samples returned from the Moon continue to be studied by scientists around the world. As new tools and techniques are developed, scientists can learn even more about our Moon, discovering clues to our planet's origins and the formation of the solar system. Additionally, educators can be certified to borrow lunar samples for use in their classrooms.

The Apollo 11 astronauts crowd around a lunar sample contained in a protective case.

The Apollo 11 astronauts take a closer look at a sample they brought back from the Moon. Image credit: NASA | › View full image collection

Perhaps the most important scientific finding came from comparing similarities in the composition of lunar and terrestrial rocks and then noting differences in the amount of specific substances. This suggested a new theory of the Moon's formation: that it accreted from debris ejected from Earth by a collision with a Mars-size object early in our planet's 4.5-billion-year history.

The 12 astronauts who walked on the Moon are the best-known faces of the Apollo program, but in numbers, they were also the smallest part of the program. About 400,000 men and women worked on Apollo, building the vehicles, calculating trajectories, even making and packing food for the crews. Many of them worked on solving a deceptively simple question: "How do we guide astronauts to the Moon and back safely?" Some built the spacecraft to carry humans to the Moon, enable surface operations and safely return astronauts to Earth. Others built the rockets that would launch these advanced spacecraft. In doing all this, NASA engineers and scientists helped lead the computing revolution from transistors to integrated circuits, the forebears to the microchip. An integrated circuit – a miniaturized electronic circuit that is used in nearly all electronic equipment today – is lighter, smaller and able to function on less power than the older transistors and capacitors. To suit the needs of the space capsule, NASA developed integrated circuits for use in the capsule's onboard computers. Additionally, computing advancements provided NASA with software that worked exactly as it was supposed to every time. That software led to the development of the systems used today in retail credit-card swipe devices.

Some lesser-known benefits of the Apollo program include the technologies that commercial industries would then further advance to benefit humans right here on Earth. These "spinoffs" include technology that improved kidney dialysis, modernized athletic shoes, improved home insulation, advanced commercial and residential water filtration, and developed the freeze-drying technique for preserving foods.

Apollo was succeeded by missions that have continued to build a human presence in space and advance technologies on Earth. Hardware developed for Apollo was used to build America's first Earth-orbiting space station, Skylab. After Skylab, during the Apollo-Soyuz test project, American and Soviet spacecraft docked together, laying the groundwork for international cooperation in human spaceflight. American astronauts and Soviet cosmonauts worked together aboard the Soviet space station Mir, performing science experiments and learning about long-term space travel's effects on the human body. Eventually, the U.S. and Russia, along with 13 other nations, partnered to build and operate the International Space Station, a world-class science laboratory orbiting 400 kilometers (250 miles) above Earth, making a complete orbit every 90 minutes.
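
Those orbit numbers are easy to sanity-check with basic circular-orbit mechanics. Here is a minimal Python sketch, assuming a circular orbit and textbook values for Earth's mass and radius:

```python
import math

# Rough check: orbital period of a circular orbit 400 km above Earth.
# Standard constants; the 400 km altitude is the figure quoted above.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of Earth, kg
R_EARTH = 6.371e6      # mean radius of Earth, m

r = R_EARTH + 400e3                # orbital radius, m
v = math.sqrt(G * M_EARTH / r)     # circular orbital speed, m/s
T = 2 * math.pi * r / v            # orbital period, s

print(f"speed: {v / 1000:.1f} km/s, period: {T / 60:.0f} minutes")
# -> about 7.7 km/s and roughly 92 minutes, consistent with "every 90 minutes"
```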

Graphic showing a possible configuration for the future lunar gateway

Although the configuration is not final, this infographic shows the current lineup of parts comprising the lunar Gateway. Image credit: NASA | › Full image and caption

And the innovations continue today. NASA is planning the Artemis program to put humans on the Moon again in 2024 with innovative new technologies and the intent of establishing a permanent human presence. Working in tandem with commercial and international partners, NASA will develop the Space Launch System launch vehicle, the Orion crew capsule, a new lunar lander and other operations hardware. The lunar Gateway – a small spaceship that will orbit the Moon and include living quarters for astronauts, a lab for science and research, and ports for visiting spacecraft – will provide access to more of the lunar surface than ever before. While at the Moon, astronauts will research ways to use lunar resources for survival and further technological development. The lessons and discoveries from Artemis will eventually pave a path for a future human mission to Mars.

Teach It

Use these standards-aligned lessons to help students learn more about Earth's only natural satellite:

As students head out for the summer, get them excited to learn more about the Moon and human exploration using these student projects:

Explore More

TAGS: K-12 Education, Teachers, Educators, Classroom, Engineering, Science, Students, Projects, Moon, Apollo, Summer

  • Ota Lutz

The Millennium Falcon takes on TIE fighters in a scene from 'Star Wars: The Force Awakens.'

This feature was originally published on May 3, 2016.


In the News

What do "Star Wars," NASA's Dawn spacecraft and Newton's Laws of Motion have in common? An educational lesson that turns science fiction into science fact using spreadsheets – a powerful tool for developing the scientific models addressed in the Next Generation Science Standards.

The TIE (Twin Ion Engine) fighter is a staple of the "Star Wars" universe. Darth Vader flew one in "A New Hope." Poe Dameron piloted one in "The Force Awakens." And many, many Imperial pilots met their fates in them. While the fictional TIE fighters in "Star Wars" flew a long time ago in a galaxy far, far away, ion engines are a reality in this galaxy today – and have a unique connection to NASA’s Jet Propulsion Laboratory.

Launched in 1998, Deep Space 1 was the first spacecraft to use an ion engine as its primary propulsion system, flying by asteroid 9969 Braille and comet Borrelly. Fueled by the success of Deep Space 1, engineers at JPL set forth to develop the next spacecraft that would use ion propulsion. This mission, called Dawn, would take ion-powered spacecraft to the next level, entering orbit twice – around the two largest objects in the asteroid belt: Vesta and Ceres.

How Does It Work?

Ion engines rely on two principles that Isaac Newton first described in 1687. First, a positively charged atom (ion) is pushed out of the engine at a high velocity. Newton’s Third Law of Motion states that for every action there is an equal and opposite reaction, so a small force pushes back on the spacecraft in the opposite direction – forward! According to Newton’s Second Law of Motion, there is a relationship between the force (F) exerted on an object, its mass (m) and its acceleration (a). The equation F=ma describes that relationship and tells us that the small force applied to the spacecraft by the exiting atom provides a small amount of acceleration to the spacecraft. Push enough atoms out, and you'll get enough acceleration to really speed things up.
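
To get a feel for the numbers, here is a minimal Python sketch of a = F/m for an ion engine. The thrust and mass are illustrative assumptions at the right order of magnitude (tens of millinewtons of thrust; roughly a one-ton spacecraft), not specifications for Dawn or Deep Space 1:

```python
# Newton's second law, a = F / m, for a small continuous thrust.
# Illustrative values only, not actual mission specifications.
thrust = 0.09            # newtons (~90 millinewtons, typical ion-engine scale)
mass = 1000.0            # kilograms, assumed spacecraft mass

a = thrust / mass        # acceleration, m/s^2
dv_per_day = a * 86400   # velocity gained per day of continuous thrust, m/s

print(f"acceleration: {a:.1e} m/s^2")            # ~9e-05 m/s^2, gentler than a feather
print(f"delta-v per day: {dv_per_day:.1f} m/s")  # ~7.8 m/s every day
```

The per-second push is tiny, but because an ion engine can run continuously for months or years, those daily gains compound into thousands of meters per second of velocity change.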


Why Is It Important?

Compared with traditional chemical rockets, ion propulsion is faster, cheaper and safer:

  • Faster: Spacecraft powered by ion engines can reach speeds of up to 90,000 meters per second (more than 201,000 mph!).
  • Cheaper: When it comes to fuel efficiency, ion engines can reach more than 90 percent fuel efficiency, while chemical rockets are only about 35 percent efficient.
  • Safer: Ion thrusters are fueled by inert gases. Most of them use xenon, which is a non-toxic, chemically inert (no risk of exploding), odorless, tasteless and colorless gas.

These properties make ion propulsion a very attractive solution when engineers are designing spacecraft. While not every spacecraft can use ion propulsion – some need greater rates of acceleration than ion propulsion can provide – the number and types of missions using these efficient engines is growing. In addition to being used on the Dawn spacecraft and communication satellites orbiting Earth, ion propulsion could be used to boost the International Space Station into higher orbits and will likely be a part of many future missions exploring our own solar system.

Teach It

Newton’s Laws of Motion are an important part of middle and high school physical science and are addressed specifically by the Next Generation Science Standards as well as Common Core Math standards. The lesson "Ion Propulsion: Using Spreadsheets to Model Additive Velocity" lets students study the relationship between force, mass and acceleration as described by Newton's Second Law as they develop spreadsheet models that apply those principles to real-world situations.

This lesson meets the following Next Generation Science and Common Core Math Standards:

NGSS Standards:

  • MS-PS2-2: Plan an investigation to provide evidence that the change in an object’s motion depends on the sum of the forces on the object and the mass of the object.
  • HS-PS2-1: Analyze data to support the claim that Newton’s second law of motion describes the mathematical relationship among the net force on a macroscopic object, its mass, and its acceleration.
  • HS-PS2-2: Use mathematical representations to support the claim that the total momentum of a system of objects is conserved when there is no net force on the system.

Common Core Math Standards:

  • Grade 8: Expressions and Equations A.4: Perform operations with numbers expressed in scientific notation, including problems where both decimal and scientific notation are used. Use scientific notation and choose units of appropriate size for measurements of very large or very small quantities (e.g., use millimeters per year for seafloor spreading). Interpret scientific notation that has been generated by technology.
  • High School: Algebra CED.A.4: Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations.
  • High School: Functions LE.A: Construct and compare linear, quadratic, and exponential models and solve problems.
  • High School: Functions BF.A.1: Write a function that describes a relationship between two quantities.
  • High School: Statistics and Probability ID.C: Interpret linear models.
  • High School: Number and Quantity Q.A.1: Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays.

Explore More

TAGS: May the Fourth, Star Wars Day, F=ma, ion propulsion, Dawn, Deep Space 1, lesson, classroom activity, NGSS, Common Core Math

  • Lyle Tavernier

A glowing, orange ring outlines a black hole.

In the News

Accomplishing what was previously thought to be impossible, a team of international astronomers has captured an image of a black hole’s silhouette. Evidence of the existence of black holes – mysterious places in space where nothing, not even light, can escape – has existed for quite some time, and astronomers have long observed their effects on their surroundings. In the popular imagination, it was thought that capturing an image of a black hole was impossible because an image of something from which no light can escape would appear completely black. For scientists, the challenge was how, from thousands or even millions of light-years away, to capture an image of the hot, glowing gas falling into a black hole. An ambitious team of international astronomers and computer scientists has managed to accomplish both.

Working for well over a decade to achieve the feat, the team improved upon an existing radio astronomy technique for high-resolution imaging and used it to detect the silhouette of a black hole – outlined by the glowing gas that surrounds its event horizon, the precipice beyond which light cannot escape. Learning about these mysterious structures can help students understand gravity and the dynamic nature of our universe, all while sharpening their math skills.

How They Did It

Though scientists had theorized they could image black holes by capturing their silhouettes against their glowing surroundings, the ability to image an object so distant still eluded them. A team formed to take on the challenge, creating a network of telescopes known as the Event Horizon Telescope, or the EHT. They set out to capture an image of a black hole by improving upon a technique that allows for the imaging of far-away objects, known as Very Long Baseline Interferometry, or VLBI.

Telescopes of all types are used to see distant objects. The larger the diameter, or aperture, of the telescope, the more light it can gather and the higher its resolution (or ability to image fine details). To see details in objects that are far away and appear small and dim from Earth, we need to gather as much light as possible with very high resolution, so we need to use a telescope with a large aperture.

That’s why the VLBI technique was essential to capturing the black hole image. VLBI works by creating an array of smaller telescopes that can be synchronized to focus on the same object at the same time and act as a giant virtual telescope. In some cases, the smaller telescopes are also an array of multiple telescopes. This technique has been used to track spacecraft and to image distant cosmic radio sources, such as quasars.

More than a dozen antennas pointing forward sit on barren land surrounded by red and blue-purple mountains in the distance.

Making up one piece of the EHT array of telescopes, the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile has 66 high-precision antennas. Image credit: NRAO/AUI/NSF | + Expand image

The aperture of a giant virtual telescope such as the Event Horizon Telescope is as large as the distance between the two farthest-apart telescope stations – for the EHT, those two stations are at the South Pole and in Spain, creating an aperture that’s nearly the same as the diameter of Earth. Each telescope in the array focuses on the target, in this case the black hole, and collects data from its location on Earth, providing a portion of the EHT’s full view. The more telescopes in the array that are widely spaced, the better the image resolution.
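
That relationship between baseline and sharpness follows the standard diffraction limit, theta ≈ wavelength / aperture diameter. A minimal Python sketch, assuming the EHT's roughly 1.3-millimeter observing wavelength and an Earth-diameter baseline (both are good approximations, not exact mission figures):

```python
import math

# Diffraction-limited angular resolution: theta ~ wavelength / aperture.
wavelength = 1.3e-3   # meters (the EHT observes at about 1.3 mm)
baseline = 1.27e7     # meters, roughly the diameter of Earth

theta_rad = wavelength / baseline
theta_uas = math.degrees(theta_rad) * 3600 * 1e6   # radians -> microarcseconds

print(f"resolution: {theta_uas:.0f} microarcseconds")
# ~20 microarcseconds: fine enough to resolve M87*'s glowing ring,
# which spans roughly 40 microarcseconds on the sky
```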

This video shows the global network of radio telescopes in the EHT array that performed observations of the black hole in the galaxy M87. Credit: C. Fromm and L. Rezzolla (Goethe University Frankfurt)/Black Hole Cam/EHT Collaboration | Watch on YouTube

To test VLBI for imaging a black hole and a number of computer algorithms for sorting and synchronizing data, the Event Horizon Telescope team decided on two targets, each offering unique challenges.

The closest supermassive black hole to Earth, Sagittarius A*, interested the team because it is in our galactic backyard – at the center of our Milky Way galaxy, 26,000 light-years (156 quadrillion miles) away. (An asterisk is the astronomical standard for denoting a black hole.) Though not the only black hole in our galaxy, it is the black hole that appears largest from Earth. But its location in the same galaxy as Earth meant the team would have to look through “pollution” caused by stars and dust to image it, meaning there would be more data to filter out when processing the image. Nevertheless, because of the black hole’s local interest and relatively large size, the EHT team chose Sagittarius A* as one of its two targets.

An image showing a smattering of orange stars against the black backdrop of space with a small black circle in the middle and a rectangle identifying the location of the M87 black hole.

A close-up image of the core of the M87 galaxy, imaged by the Chandra X-ray Observatory. Image credit: NASA/CXC/Villanova University/J. Neilsen | + Expand image

A blue jet extends from a bright yellow point surrounded by smaller yellow stars.

This image from NASA's Hubble Space Telescope shows a jet of subatomic particles streaming from the center of M87*. Image credits: NASA and the Hubble Heritage Team (STScI/AURA) | + Expand image

The second target was the supermassive black hole M87*. One of the largest known supermassive black holes, M87* is located at the center of the gargantuan elliptical galaxy Messier 87, or M87, 53 million light-years (318 quintillion miles) away. Substantially more massive than Sagittarius A*, which contains 4 million solar masses, M87* contains 6.5 billion solar masses. One solar mass is equivalent to the mass of our Sun, approximately 2x10^30 kilograms. In addition to its size, M87* interested scientists because, unlike Sagittarius A*, it is an active black hole, with matter falling into it and spewing out in the form of jets of particles that are accelerated to velocities near the speed of light. But its distance made it even more of a challenge to capture than the relatively local Sagittarius A*. As described by Katie Bouman, a computer scientist with the EHT who led development of one of the algorithms used to sort telescope data during the processing of the historic image, it’s akin to capturing an image of an orange on the surface of the Moon.

By 2017, the EHT was a collaboration of eight sites around the world – and more have been added since then. Before the team could begin collecting data, they had to find a time when the weather was likely to be conducive to telescope viewing at every location. For M87*, the team tried for good weather in April 2017 and, of the 10 days chosen for observation, a whopping four days were clear at all eight sites!

Each telescope used for the EHT had to be highly synchronized with the others to within a fraction of a millimeter using an atomic clock locked onto a GPS time standard. This degree of precision makes the EHT capable of resolving objects about 4,000 times better than the Hubble Space Telescope. As each telescope acquired data from the target black hole, the digitized data and time stamp were recorded on computer disk media. Gathering data for four days around the world gave the team a substantial amount of data to process. The recorded media were then physically transported to a central location because the amount of data, around 5 petabytes, exceeds what current internet speeds can handle. At this central location, data from all eight sites were synchronized using the time stamps and combined to create a composite set of images, revealing the never-before-seen silhouette of M87*’s event horizon. The team is also working on generating an image of Sagittarius A* from additional observations made by the EHT.
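
A quick back-of-the-envelope calculation shows why the disks were flown rather than uploaded. A minimal sketch, assuming a sustained 1-gigabit-per-second connection (an optimistic figure for many of the remote telescope sites):

```python
# Why the EHT data traveled by airplane: 5 petabytes vs. a 1 Gbps link.
data_bits = 5e15 * 8    # 5 petabytes expressed in bits
link_bps = 1e9          # assumed sustained 1 gigabit-per-second connection

seconds = data_bits / link_bps
print(f"transfer time: {seconds / 86400:.0f} days")
# ~463 days of continuous transfer; shipping the disks took only days
```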

This zoom video starts with a view of the ALMA telescope array in Chile and zooms in on the heart of M87, showing successively more detailed observations and culminating in the first direct visual evidence of a supermassive black hole’s silhouette. Credit: ESO/L. Calçada, Digitized Sky Survey 2, ESA/Hubble, RadioAstron, De Gasperin et al., Kim et al., EHT Collaboration. Music: Niklas Falcke | Watch on YouTube

As more telescopes are added and the rotation of Earth is factored in, more of the image can be resolved, and we can expect future images to be higher resolution. But we might never have a complete picture, as Katie Bouman explains here (under “Imaging a Black Hole”).

To complement the EHT findings, several NASA spacecraft were part of a large effort to observe the black hole using different wavelengths of light. As part of this effort, NASA’s Chandra X-ray Observatory, Nuclear Spectroscopic Telescope Array (NuSTAR) and Neil Gehrels Swift Observatory space telescope missions – all designed to detect different varieties of X-ray light – turned their gaze to the M87 black hole around the same time as the EHT in April 2017. NASA’s Fermi Gamma-ray Space Telescope was also watching for changes in gamma-ray light from M87* during the EHT observations. If the EHT observed changes in the structure of the black hole’s environment, data from these missions and other telescopes could be used to help figure out what was going on.

Though NASA observations did not directly trace out the historic image, astronomers used data from Chandra and NuSTAR satellites to measure the X-ray brightness of M87*’s jet. Scientists used this information to compare their models of the jet and disk around the black hole with the EHT observations. Other insights may come as researchers continue to pore over these data.

Why It's Important

Learning about mysterious structures in the universe provides insight into physics and allows us to test observation methods and theories, such as Einstein’s theory of general relativity. Massive objects deform spacetime in their vicinity, and although the theory of general relativity has been directly proven accurate for smaller-mass objects, such as Earth and the Sun, the theory has not yet been directly proven for black holes and other regions containing dense matter.

One of the main results of the EHT black hole imaging project is a more direct calculation of a black hole’s mass than ever before. Using the EHT, scientists were able to directly observe and measure the radius of M87*’s event horizon, or its Schwarzschild radius, and compute the black hole’s mass. That estimate was close to the one derived from a method that uses the motion of orbiting stars – thus validating it as a method of mass estimation.
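
The formula connecting those two quantities is the Schwarzschild radius, r_s = 2GM/c^2. A minimal Python sketch using the 6.5-billion-solar-mass figure quoted earlier; running it forward gives the event horizon's size, and inverting the same formula turns a measured radius into a mass:

```python
# Schwarzschild radius: r_s = 2 * G * M / c^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # mass of the Sun, kg
AU = 1.496e11          # meters in one astronomical unit

mass = 6.5e9 * M_SUN             # M87*, about 6.5 billion solar masses
r_s = 2 * G * mass / C**2        # event horizon radius, m

print(f"Schwarzschild radius: {r_s:.2e} m ({r_s / AU:.0f} au)")
# ~1.9e13 m, roughly 130 au: about four times Neptune's distance from the Sun.
# Measuring r_s from the image and solving for M instead yields the mass.
```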

The size and shape of a black hole, which depend on its mass and spin, can be predicted from general relativity equations. General relativity predicts that this silhouette would be roughly circular, but other theories of gravity predict slightly different shapes. The image of M87* shows a circular silhouette, thus lending credibility to Einstein’s theory of general relativity near black holes.

An illustration of a black hole surrounded by a bright, colorful swirl of material. Text describes each part of the black hole and its surroundings.

This artist’s impression depicts a rapidly spinning supermassive black hole surrounded by an accretion disc. Image credit: ESO | + Expand image

The data also offer some insight into the formation and behavior of black hole structures, such as the accretion disk that feeds matter into the black hole and plasma jets that emanate from its center. Scientists have hypothesized about how an accretion disk forms, but they’ve never been able to test their theories with direct observation until now. Scientists are also curious about the mechanism by which some supermassive black holes emit enormous jets of particles traveling at near light-speed.

These questions and others will be answered as more data is acquired by the EHT and synthesized in computer algorithms. Be sure to stay tuned for that and the next expected image of a black hole – our Milky Way’s own Sagittarius A*.

Teach It

Capture your students’ enthusiasm about black holes by challenging them to solve these standards-aligned math problems.

Model black-hole interaction with this NGSS-aligned lesson:

Explore More


Check out these related resources for students from NASA’s Space Place

TAGS: Black Hole, Teachable Moments, Science, K-12 Education, Teachers, Educators

  • Ota Lutz

Illustration of spacecraft against a starry background

Update: March 15, 2019 – The answers to the 2019 NASA Pi Day Challenge are here! View the illustrated answer key


In the News

The excitement of Pi Day – and our annual excuse to chow down on pie – is upon us! The holiday celebrating the mathematical constant pi arrives on March 14, and with it comes the sixth installment of the NASA Pi Day Challenge from the Jet Propulsion Laboratory’s Education Office. This challenge gives students in grades 6-12 a chance to solve four real-world problems faced by NASA scientists and engineers. (Even if you’re done with school, they’re worth a try for the bragging rights.)

https://www.jpl.nasa.gov/edu/teach/activity/pi-in-the-sky-6/

Visit the "Pi in the Sky 6" lesson page to explore classroom resources and downloads for the 2019 NASA Pi Day Challenge. Image credit: NASA/JPL-Caltech/Kim Orr | + Expand image

Why March 14?

Pi, the ratio of a circle’s circumference to its diameter, is what is known as an irrational number. As an irrational number, its decimal representation never ends, and it never repeats. Though it has been calculated to trillions of digits, we use far fewer at NASA. In fact, 3.14 is a good approximation, which is why March 14 (or 3/14 in U.S. month/day format) came to be the date that we celebrate this mathematical marvel.
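
To see why 3.14 is good enough for many purposes, here is a minimal Python sketch comparing it with full double precision for a planet-sized circle, using Earth's equatorial radius as a convenient example:

```python
import math

# Error from truncating pi to 3.14 when computing Earth's equatorial circumference.
radius_km = 6378.137                    # Earth's equatorial radius, km
exact = 2 * math.pi * radius_km
approx = 2 * 3.14 * radius_km

print(f"exact:  {exact:.1f} km")
print(f"approx: {approx:.1f} km")
print(f"error:  {exact - approx:.1f} km")
# ~20 km out of ~40,075 km, an error of about 0.05 percent
```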

The first-known Pi Day celebration occurred in 1988. In 2009, the U.S. House of Representatives passed a resolution designating March 14 as Pi Day and encouraging teachers and students to celebrate the day with activities that teach students about pi.

The 2019 Challenge

This year’s NASA Pi Day Challenge features four planetary puzzlers that show students how pi is used at the agency. The challenges involve weathering a Mars dust storm, sizing up a shrinking storm on Jupiter, estimating the water content of a rain cloud on Earth and blasting ice samples with lasers!

› Take on the 2019 NASA Pi Day Challenge!

The Science Behind the Challenge

In late spring of 2018, a dust storm began stretching across Mars and eventually nearly blanketed the entire planet in thick dust. Darkness fell across Mars’ surface, blocking the vital sunlight that the solar-powered Opportunity rover needed to survive. It was the beginning of the end for the rover’s 15-year mission on Mars. At its height, the storm covered all but the peak of Olympus Mons, the largest known volcano in the solar system. In the Deadly Dust challenge, students must use pi to calculate what percentage of the Red Planet was covered by the dust storm.
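
For a flavor of the math behind Deadly Dust, the sketch below compares Mars' total surface area with a single dust-free circle around the peak of Olympus Mons. The 300-kilometer gap radius is a made-up illustration, not the challenge's actual figure:

```python
import math

# Fraction of Mars covered by a global dust storm that leaves one circular gap.
# The gap radius is a hypothetical value for illustration only.
R_MARS = 3389.5       # mean radius of Mars, km
gap_radius = 300.0    # assumed radius of the dust-free region, km

total_area = 4 * math.pi * R_MARS**2       # surface area of a sphere
gap_area = math.pi * gap_radius**2         # small cap approximated as a flat disk

covered = 1 - gap_area / total_area
print(f"covered: {covered:.2%}")           # ~99.8% of the planet
```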

The Terra satellite, orbiting Earth since 1999, uses the nine cameras on its Multi-Angle Imaging SpectroRadiometer, or MISR, instrument to provide scientists with unique views of Earth, returning data about atmospheric particles, land-surface features and clouds. Estimating the amount of water in a cloud, and the potential for rainfall, is serious business. Knowing how much rain may fall in a given area can help residents and first responders prepare for emergencies like flooding and mudslides. In Cloud Computing, students can use their knowledge of pi and geometric shapes to estimate the amount of water contained in a cloud.

Jupiter’s Great Red Spot, a giant storm that has been fascinating observers since the early 19th century, is shrinking. The storm has been continuously observed since the 1830s, but measurements from spacecraft like Voyager, the Hubble Space Telescope and Juno indicate the storm is getting smaller. How much smaller? In Storm Spotter, students can determine the answer to that very question faced by scientists.

Scientists studying ices found in space, such as comets, want to understand what they’re made of and how they interact and react with the environment around them. To see what molecules may form in space when a comet comes into contact with solar wind or sunlight, scientists place an ice sample in a vacuum and then expose it to electrons or ultraviolet photons. Scientists have analyzed samples in the lab and detected molecules that were later observed in space on comet 67P/Churyumov-Gerasimenko. To analyze the lab samples, an infrared laser is aimed at the ice, causing it to explode. But the ice will explode only if the laser is powerful enough. Scientists use pi to figure out how strong the laser needs to be to explode the sample – and students can do the same when they solve the Icy Intel challenge.

Explore More

Participate

Join the conversation and share your Pi Day Challenge answers with @NASAJPL_Edu on social media using the hashtag #NASAPiDayChallenge

Blogs and Features

Related Activities

Multimedia

Facts and Figures

Missions and Instruments

Websites

TAGS: Pi Day, K-12, STEM, Science, Engineering, Technology, Math, Pi, Educators, Teachers, Informal Education, Museums

  • Lyle Tavernier

In the News

This summer, a global dust storm encircled Mars, blocking much of the vital solar energy that NASA’s Opportunity rover needs to survive. After months of listening for a signal, the agency has declared that the longest-lived rover to explore Mars has come to the end of its mission. Originally slated for a three-month mission, the Opportunity rover lived a whopping 14.5 years on Mars. Opportunity beat the odds many times while exploring the Red Planet, returning an abundance of scientific data that paved the way for future exploration.

Scientists and engineers are celebrating this unprecedented mission success, still analyzing data collected during the past decade and a half and applying lessons learned to the design of future spacecraft. For teachers, this historic mission provides lessons in engineering design, troubleshooting and scientific discovery.

How They Did It

Launched in 2003, the twin Mars Exploration Rovers, Spirit and Opportunity, landed in early 2004 as the second generation of NASA rovers to explore our neighboring planet.

Preceded by the small Sojourner rover in 1997, Spirit and Opportunity were substantially larger, each with a mass of about 185 kilograms – weighing about 400 pounds on Earth but only about 150 pounds under Mars' weaker gravity – and standing about 5 feet tall. The solar-powered rovers were designed for a mission lasting 90 sols, or Mars days, during which they would look for evidence of water on the seemingly barren planet.
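
That Earth-versus-Mars difference comes straight from W = mg. A minimal Python sketch with standard surface-gravity values:

```python
# Weight W = m * g for the same rover on Earth and on Mars.
mass = 185.0            # kilograms (mass does not change between planets)
g_earth = 9.81          # surface gravity of Earth, m/s^2
g_mars = 3.71           # surface gravity of Mars, m/s^2
N_PER_LBF = 4.448       # newtons per pound-force

for name, g in [("Earth", g_earth), ("Mars", g_mars)]:
    weight_n = mass * g
    print(f"{name}: {weight_n:.0f} N (~{weight_n / N_PER_LBF:.0f} lb)")
# Earth: ~1815 N (~408 lb); Mars: ~686 N (~154 lb)
```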

Dust in the Wind

Scientists and engineers always hope a spacecraft will outlive its designed lifetime, and the Mars Exploration Rovers did not disappoint. Engineers at NASA’s Jet Propulsion Laboratory in Pasadena, California, expected the lifetime of these sun-powered robots to be limited by dust accumulating on the rovers’ solar panels. As expected, power input to the rovers slowly decreased as dust settled on the panels and blocked some of the incoming sunlight. However, the panels were “cleaned” accidentally when seasonal winds blew off the dust. Several times during the mission, power levels were restored to pre-dusty conditions. Because of these events, the rovers were able to continue their exploration much longer than expected with enough power to continue running all of their instruments.

Side-by-side images of Opportunity on Mars, showing dust on its solar panels and then relatively clean solar panels

A self-portrait of NASA's Mars Exploration Rover Opportunity taken in late March 2014 (right) shows that much of the dust on the rover's solar arrays was removed since a similar portrait from January 2014 (left). Image Credit: NASA/JPL-Caltech/Cornell Univ./Arizona State Univ. | › Full image and caption

Terrestrial Twin

To troubleshoot and overcome challenges during the rovers’ long mission, engineers would perform tests on a duplicate model of the spacecraft, which remained on Earth for just this purpose. One such instance was in 2005, when Opportunity got stuck in the sand. Its right front wheel dug into loose sand, reaching to just below its axle. Engineers and scientists worked for five weeks to free Opportunity, first using images and spectroscopy obtained by the rover’s instruments to recreate the sand trap on Earth and then placing the test rover in the exact same position as Opportunity. The team eventually found a way to get the test rover out of the sand trap. Engineers tested their commands repeatedly with consistent results, giving them confidence in their solution. The same commands were relayed to Opportunity through NASA’s Deep Space Network, and the patient rover turned its stuck wheel just the right amount and backed out of the trap that had ensnared it for over a month, enabling the mission to continue.

Engineers test moves on a model of the Opportunity rover in the In-Situ Instrument Laboratory at JPL

Inside the In-Situ Instrument Laboratory at JPL, rover engineers check how a test rover moves in material chosen to simulate some difficult Mars driving conditions. | › Full image and caption

A few years later, in 2009, Spirit wasn’t as lucky. Having already sustained some wheel problems, Spirit got stuck on a slope in a position that would not be favorable for the Martian winter. Engineers were not able to free Spirit before winter took hold, denying the rover adequate sunlight for power. Its mission officially ended in 2011. Meanwhile, despite a troubled shoulder joint on its robotic arm that first started showing wear in 2006, Opportunity continued exploring the Red Planet. It wasn’t until a dust storm completely enveloped Mars in the summer of 2018 that Opportunity finally succumbed to the elements.

The Final Act

animation showing a dust storm moving across Mars

This set of images from NASA’s Mars Reconnaissance Orbiter (MRO) shows a giant dust storm building up on Mars in 2018, with rovers on the surface indicated as icons. Image credit: NASA/JPL-Caltech/MSSS | › Full image and caption

simulated views of the sun as the 2018 dust storm darkened from Opportunity's perspective on Mars

This series of images shows simulated views of a darkening Martian sky blotting out the Sun from NASA’s Opportunity rover’s point of view in the 2018 global dust storm. Each frame corresponds to a tau value, or measure of opacity: 1, 3, 5, 7, 9, 11. Image credit: NASA/JPL-Caltech/TAMU | › Full image and caption

Dust storm season on Mars can be treacherous for solar-powered rovers because if they are in the path of the dust storm, their access to sunlight can be obstructed for months on end – longer than their batteries can sustain them. Though several dust storms occurred on Mars during the reign of the Mars Exploration Rovers, 2018 brought a large, thick dust storm that covered the entire globe and blocked Opportunity’s access to sunlight for four months. Only the caldera of Olympus Mons, the largest known volcano in the solar system, peeked out above the dust.

The opacity, or “thickness,” of the dust in Mars’ atmosphere is denoted by the Greek letter tau. The higher the tau, the less sunlight is available to charge a surface spacecraft’s batteries. An average tau for Opportunity’s location is 0.5. The tau at the peak of the 2018 dust storm was 10.8. This thick dust was imaged and measured by the Curiosity Mars rover on the opposite side of the planet. (Curiosity is powered by a radioisotope thermoelectric generator, so it did not depend on sunlight.)
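
Tau is an optical depth, so the direct sunlight reaching the surface falls off roughly as e^(-tau), ignoring light scattered back down. A minimal Python sketch comparing a typical sky with the storm's peak:

```python
import math

# Direct sunlight transmitted through dust of optical depth tau: ~ e^(-tau).
# A simple approximation that neglects diffuse, scattered light.
for label, tau in [("typical sky", 0.5), ("2018 storm peak", 10.8)]:
    transmitted = math.exp(-tau)
    print(f"{label} (tau={tau}): {transmitted:.2e} of direct sunlight")
# tau 0.5 passes ~61%; tau 10.8 passes ~0.002% - near-darkness for solar panels
```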

Since the last communication with Opportunity on June 10, 2018, NASA has sent more than 1,000 commands to the rover that have gone unanswered. Each of these commands was an attempt to get Opportunity to send back a signal saying it was alive. A last-ditch effort to reset the rover’s mission clock was met with silence.

Why It’s Important

The Mars Exploration Rovers were designed to give a human-height perspective of Mars, using panoramic cameras approximately 5 feet off the surface, while their science instruments investigated Mars’ surface geology for signs of water. Spirit and Opportunity returned more than 340,000 raw images conveying the beauty of Mars and leading to scientific discoveries. The rovers brought Mars into classrooms and living rooms around the world. From curious geologic formations to dune fields, dust devils and even their own tracks on the surface of the Red Planet, the rovers showed us Mars in a way we had never seen it before.

tracks on Mars with a patch of white soil showing

This mosaic shows an area of disturbed soil made by the Spirit rover's stuck right front wheel. The trench exposed a patch of nearly pure silica, with the composition of opal. Image credit: NASA/JPL-Caltech/Cornell | › Full image and caption

Mineral vein on the surface of Mars

This color view of a mineral vein was taken by the Mars rover Opportunity on Nov. 7, 2011. Image credit: NASA/JPL-Caltech/Cornell/ASU | › Full image and caption

The rovers discovered that Mars was once a warmer, wetter world than it is today and was potentially able to support microbial life. Opportunity landed in a crater and almost immediately discovered deposits of hematite, a mineral known to typically form in the presence of water. During its travels across the Martian surface, Spirit found rocks rich in magnesium and iron carbonates that likely formed when Mars was warm and wet, and sustained a near-neutral pH environment hospitable to life. At one point, while dragging its malfunctioning wheel, Spirit excavated 90 percent pure silica lurking just below the sandy surface. On Earth, this sort of silica usually exists in hot springs or hot steam vents, where life as we know it often finds a happy home. Later in its mission, near the rim of Endeavour crater, Opportunity found bright-colored veins of gypsum in the rocks. These veins likely formed when water flowed through underground fractures in the rocks, leaving behind the calcium sulfate that makes up gypsum. All of these discoveries led scientists to believe that Mars was once more hospitable to life than it is today, and they laid the groundwork for future exploration.

Imagery from the Mars Reconnaissance Orbiter and Mars Odyssey, both orbiting the Red Planet, has been combined with surface views and data from the Mars Exploration Rovers for an unprecedented understanding of the planet’s geology and environment.

Not only did Spirit and Opportunity add to our understanding of Mars, but also the rovers set the stage for future exploration. Following in their tracks, the Curiosity rover landed in 2012 and is still active, investigating the planet’s surface chemistry and geology, and confirming the presence of past water. Launching in 2020 is the next Mars rover, currently named Mars 2020. Mars 2020 will be able to analyze soil samples for signs of past microbial life. It will carry a drill that can collect samples of interesting rocks and soils, and set them aside in a cache on the surface of Mars. In the future, those samples could be retrieved and returned to Earth by another mission. Mars 2020 will also do preliminary research for future human missions to the Red Planet, including testing a method of producing oxygen from Mars’ atmosphere.

It’s thanks to three generations of surface-exploring rovers coupled with the knowledge obtained by orbiters and stationary landers that we have a deeper understanding of the Red Planet’s geologic history and can continue to explore Mars in new and exciting ways.

Teach It

Use these standards-aligned lessons and related activities to get students doing engineering, troubleshooting and scientific discovery just like NASA scientists and engineers!

Explore More

Try these related resources for students from NASA’s Space Place

TAGS: K-12 Education, Teachers, Educators, Students, Opportunity, Mars rover, Rovers, Mars, Lessons, Activities, Missions

  • Ota Lutz

The supermoon lunar eclipse captured as it moved over NASA’s Glenn Research Center on September 27, 2015.

In the News

Looking up at the Moon can create a sense of awe at any time, but those who do so on the evening of January 20 will be treated to the only total lunar eclipse of 2019. Visible for its entirety in North and South America, this eclipse is being referred to by some as a super blood moon – “super” because the Moon will be closest to Earth in its orbit during the full moon (more on supermoons here) and “blood” because the total lunar eclipse will turn the Moon a reddish hue (more on that below). This is a great opportunity for students to observe the Moon – and for teachers to make connections to in-class science content.

How It Works

Eclipses can occur when the Sun, the Moon and Earth align. Lunar eclipses can happen only during a full moon, when the Moon and the Sun are on opposite sides of Earth. At that point, the Moon can move into the shadow cast by Earth, resulting in a lunar eclipse. However, most of the time, the Moon’s slightly tilted orbit brings it above or below Earth’s shadow.

Watch on YouTube

The time period when the Moon, Earth and the Sun are lined up and on the same plane – allowing for the Moon to pass through Earth’s shadow – is called an eclipse season. Eclipse seasons last about 34 days and occur just shy of every six months. When a full moon occurs during an eclipse season, the Moon travels through Earth’s shadow, creating a lunar eclipse.

Graphic showing the alignment of the Sun, Earth and Moon when a full moon occurs during an eclipse season versus a non-eclipse season

When a full moon occurs during an eclipse season, the Moon travels through Earth's shadow, creating a lunar eclipse. Credit: NASA/JPL-Caltech | + Enlarge image

Unlike solar eclipses, which require special glasses to view and can be seen only for a few short minutes in a very limited area, a total lunar eclipse can be seen for about an hour by anyone on the nighttime side of Earth – as long as skies are clear.

What to Expect

The Moon passes through two distinct parts of Earth’s shadow during a lunar eclipse. The outer part of the cone-shaped shadow is called the penumbra. The penumbra is less dark than the inner part of the shadow because it’s penetrated by some sunlight. (You have probably noticed that some shadows on the ground are darker than others, depending on how much outside light enters the shadow; the same is true for the outer part of Earth’s shadow.) The inner part of the shadow, known as the umbra, is much darker because Earth blocks additional sunlight from entering the umbra.

At 6:36 p.m. PST (9:36 p.m. EST) on January 20, the edge of the Moon will begin entering the penumbra. The Moon will dim very slightly for the next 57 minutes as it moves deeper into the penumbra. Because this part of Earth’s shadow is not fully dark, you may notice only some dim shading (if anything at all) on the Moon near the end of this part of the eclipse.

Graphic showing the positions of the Moon, Earth and Sun during a partial lunar eclipse

During a total lunar eclipse, the Moon first enters into the penumbra, or the outer part of Earth's shadow, where the shadow is still penetrated by some sunlight. Credit: NASA | + Enlarge image

At 7:33 p.m. PST (10:33 p.m. EST), the edge of the Moon will begin entering the umbra. As the Moon moves into the darker shadow, significant darkening of the Moon will be noticeable. Some say that during this part of the eclipse, the Moon looks as if it has had a bite taken out of it. That “bite” gets bigger and bigger as the Moon moves deeper into the shadow.

The Moon as seen during a partial lunar eclipse

As the Moon starts to enter into the umbra, the inner and darker part of Earth's shadow, it appears as if a bite has been taken out of the Moon. This "bite" will grow until the Moon has entered fully into the umbra. Credit: NASA | + Enlarge image

At 8:41 p.m. PST (11:41 p.m. EST), the Moon will be completely inside the umbra, marking the beginning of the total lunar eclipse. The moment of greatest eclipse, when the Moon is halfway through the umbra, occurs at 9:12 p.m. PST (12:12 a.m. EST).

Graphic showing the Moon inside the umbra

The total lunar eclipse starts once the Moon is completely inside the umbra. And the moment of greatest eclipse happens when the Moon is halfway through the umbra, as shown in this graphic. Credit: NASA | + Enlarge image

As the Moon moves completely into the umbra, something interesting happens: The Moon begins to turn reddish-orange. The reason for this phenomenon? Earth’s atmosphere. As sunlight passes through it, the small molecules that make up our atmosphere scatter blue light, which is why the sky appears blue. This leaves behind mostly red light that bends, or refracts, into Earth’s shadow. We can see the red light during an eclipse as it falls onto the Moon in Earth’s shadow. This same effect is what gives sunrises and sunsets a reddish-orange color.
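
The effect is strong because Rayleigh scattering scales roughly as 1 over the fourth power of wavelength. A minimal Python sketch comparing representative blue and red wavelengths:

```python
# Rayleigh scattering strength scales as 1 / wavelength^4.
# 450 nm and 650 nm are representative blue and red wavelengths.
blue_nm = 450.0
red_nm = 650.0

ratio = (red_nm / blue_nm) ** 4
print(f"blue light scatters ~{ratio:.1f}x more strongly than red")   # ~4.4x
# Blue is scattered out of sunlight passing through Earth's atmosphere,
# so mostly red light refracts into the shadow and onto the Moon.
```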

The Moon as seen during a total lunar eclipse at the point of greatest eclipse

As the Moon moves completely into the umbra, it turns a reddish-orange color. Credit: NASA | + Enlarge image

A variety of factors affect the appearance of the Moon during a total lunar eclipse. Clouds, dust, ash, photochemical droplets and organic material in the atmosphere can change how much light is refracted into the umbra. Additionally, the January 2019 lunar eclipse takes place when the full moon is at or near the closest point in its orbit to Earth – a time popularly known as a supermoon. This means the Moon is deeper inside the umbra shadow and therefore may appear darker. The potential for variation provides a great opportunity for students to observe and classify the lunar eclipse based on its brightness. Details can be found in the “Teach It” section below.

At 9:43 p.m. PST (12:43 a.m. EST), the edge of the Moon will begin exiting the umbra and moving into the opposite side of the penumbra. This marks the end of the total lunar eclipse.

At 10:50 p.m. PST (1:50 a.m. EST), the Moon will be completely outside the umbra. It will continue moving out of the penumbra until the eclipse ends at 11:48 p.m. PST (2:48 a.m. EST).
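For students who want to work with the timeline, the contact times above are enough to compute how long each phase lasts. Here’s a short Python sketch using the PST times from this article (all on the evening of Jan. 20, so no times cross midnight):

```python
from datetime import datetime

# Eclipse contact times in PST, from the timeline above.
events = {
    "Moon enters penumbra": "18:36",
    "Moon enters umbra": "19:33",
    "Totality begins": "20:41",
    "Greatest eclipse": "21:12",
    "Totality ends": "21:43",
    "Moon exits umbra": "22:50",
    "Eclipse ends": "23:48",
}

times = {name: datetime.strptime(t, "%H:%M") for name, t in events.items()}
names = list(times)
for earlier, later in zip(names, names[1:]):
    minutes = (times[later] - times[earlier]).seconds // 60
    print(f"{earlier} -> {later}: {minutes} min")

# Totality alone lasts 62 minutes; the full eclipse spans 5 hours, 12 minutes.
```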

What if it’s cloudy where you live? Winter eclipses always bring with them the risk of poor viewing conditions. If your view of the Moon is obscured by the weather, explore options for watching the eclipse online, such as the Time and Date live stream.

Why It’s Important

Lunar eclipses have long played an important role in understanding Earth and its motions in space.

In ancient Greece, Aristotle noted that the shadow on the Moon during lunar eclipses was always round, regardless of where an observer saw it. He realized that only a spherical Earth would cast a round shadow from every angle – a revelation he and others had many centuries before the first ships sailed around the world.

Earth wobbles on its axis like a spinning top that’s about to fall over, a phenomenon called precession. Earth completes one wobble, or precession cycle, over the course of 26,000 years. Greek astronomer Hipparchus made this discovery by comparing the position of stars relative to the Sun during a lunar eclipse to those recorded hundreds of years earlier. A lunar eclipse allowed him to see the stars and know exactly where the Sun was for comparison – directly opposite the Moon. If Earth didn’t wobble, the stars would appear to be in the same place they were hundreds of years earlier. When Hipparchus saw that the stars’ positions had indeed moved, he knew that Earth must wobble on its axis!
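A quick back-of-the-envelope calculation shows just how slow the wobble is – and why Hipparchus needed star positions recorded hundreds of years earlier to detect it. This sketch uses only the 26,000-year figure from the text:

```python
# How fast does Earth's axis precess, given one wobble per ~26,000 years?
precession_period_years = 26_000

degrees_per_century = 360 / precession_period_years * 100
arcsec_per_year = 360 * 3600 / precession_period_years

print(f"~{degrees_per_century:.1f} degrees per century")  # ~1.4
print(f"~{arcsec_per_year:.0f} arcseconds per year")      # ~50
# Far too small to notice year to year, but large enough to stand out
# when comparing observations made centuries apart.
```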

Lunar eclipses are also used for modern-day science investigations. By comparing ancient eclipse records with computer simulations, astronomers have helped determine the rate at which Earth’s rotation is slowing.

Teach It

Ask students to observe the lunar eclipse and evaluate the Moon’s brightness using the Danjon Scale of Lunar Eclipse Brightness. The Danjon scale illustrates the range of colors and brightness the Moon can take on during a total lunar eclipse, and it’s a tool observers can use to characterize the appearance of an eclipse. View the lesson guide below. After the eclipse, have students compare and justify their evaluations of the eclipse.
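As a hypothetical classroom helper – not an official NASA tool – the scale’s five ratings can be wrapped in a few lines of Python for students to record and compare their observations. The descriptions below paraphrase the standard Danjon scale:

```python
# The Danjon scale rates a total lunar eclipse from L = 0 (darkest)
# to L = 4 (brightest). Descriptions paraphrase the standard scale.
DANJON_SCALE = {
    0: "Very dark eclipse; Moon almost invisible at mid-totality",
    1: "Dark eclipse, gray or brownish; surface details hard to see",
    2: "Deep red or rust-colored eclipse with a very dark central shadow",
    3: "Brick-red eclipse; umbra often has a brighter or yellowish rim",
    4: "Very bright copper-red or orange eclipse with a bluish bright rim",
}

def describe(rating: int) -> str:
    """Return the Danjon-scale description for a student's rating."""
    if rating not in DANJON_SCALE:
        raise ValueError("Danjon ratings run from 0 to 4")
    return DANJON_SCALE[rating]

# Example: a student who saw a deep rust-colored Moon might record L = 2.
print(describe(2))
```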

Use these standards-aligned lessons and related activities to get your students excited about the eclipse, Moon phases and Moon observations:

TAGS: Lunar Eclipse, Moon, Teachers, Educators, K-12 Education, Astronomy

  • Lyle Tavernier
READ MORE

This illustration shows the position of NASA's Voyager 1 and Voyager 2 probes, outside of the heliosphere, a protective bubble created by the Sun that extends well past the orbit of Pluto.

In the News

The Voyager 2 spacecraft, launched in 1977, has reached interstellar space, a region beyond the heliosphere – the protective bubble of particles and magnetic fields created by the Sun – where the only other human-made object is its twin, Voyager 1.

The achievement means new opportunities for scientists to study this mysterious region. And for educators, it’s a chance to get students exploring the scale and anatomy of our solar system, plus the engineering and math required for such an epic journey.

How They Did It

Launched just 16 days apart, Voyager 1 and Voyager 2 were designed to take advantage of a rare alignment of the outer planets that only occurs once every 176 years. Their trajectories took them by the outer planets, where they captured never-before-seen images. They were also able to steal a little momentum from Jupiter and Saturn that helped send them on a path toward interstellar space. This “gravity assist” gave the spacecraft a velocity boost without expending any fuel. Though both spacecraft were destined for interstellar space, they followed slightly different trajectories.
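To get a feel for why a flyby adds speed without burning fuel, here’s a simplified two-dimensional sketch. In the planet’s frame the encounter only bends the path – speed relative to the planet is conserved – but adding the planet’s own orbital velocity back in can leave the spacecraft moving faster relative to the Sun. The velocities and turning angle below are illustrative, not Voyager trajectory data:

```python
import math

def heliocentric_speed_after_flyby(v_in, planet_v, turn_deg):
    """Speed relative to the Sun after a planet bends the trajectory.

    v_in:     spacecraft velocity (vx, vy) in the Sun's frame, km/s
    planet_v: planet velocity (vx, vy) in the Sun's frame, km/s
    turn_deg: angle by which the flyby rotates the planet-relative velocity
    """
    # Switch to the planet's frame of reference.
    wx, wy = v_in[0] - planet_v[0], v_in[1] - planet_v[1]
    # Rotate by the turning angle; speed in this frame is unchanged.
    a = math.radians(turn_deg)
    wx2 = wx * math.cos(a) - wy * math.sin(a)
    wy2 = wx * math.sin(a) + wy * math.cos(a)
    # Switch back to the Sun's frame.
    return math.hypot(wx2 + planet_v[0], wy2 + planet_v[1])

before = math.hypot(10, 0)  # spacecraft moving at 10 km/s (made-up value)
after = heliocentric_speed_after_flyby((10, 0), (0, 13), 90)
print(f"{before:.1f} km/s before the flyby, {after:.1f} km/s after")
```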

Illustration of the trajectories of Voyager 1 and 2

An illustration of the trajectories of Voyager 1 and Voyager 2. Image credit: NASA/JPL-Caltech | + Expand image

Voyager 1 followed a path that enabled it to fly by Jupiter in 1979, discovering the gas giant’s rings. It continued on for a 1980 close encounter with Saturn’s moon Titan before a gravity assist from Saturn hurled it above the plane of the solar system and out toward interstellar space. After Voyager 2 visited Jupiter in 1979 and Saturn in 1981, it continued on to encounter Uranus in 1986, where it obtained another assist. Its last planetary visit before heading out of the solar system was Neptune in 1989, where the gas giant’s gravity sent the probe in a southward direction toward interstellar space. Since the end of its prime mission at Neptune, Voyager 2 has been using its onboard instruments to continue sensing the environment around it, communicating data back to scientists on Earth. It was this data that scientists used to determine Voyager 2 had entered interstellar space.

How We Know

Interstellar space, the region between the stars, lies beyond the influence of the solar wind – the charged particles emanating from the Sun – but not yet within the influence of another star’s stellar wind. One hint that Voyager 2 was nearing interstellar space came in late August, when the Cosmic Ray Subsystem, an instrument that measures cosmic rays coming from the Sun and galactic cosmic rays coming from outside our solar system, measured an increase in galactic cosmic rays hitting the spacecraft. Then, on November 5, the instrument detected a sharp decrease in high-energy particles from the Sun. That downward trend continued over the following weeks.

The data from the cosmic ray instrument provided strong evidence that Voyager 2 had entered interstellar space because its twin had returned similar data when it crossed the heliopause, the outer boundary of the heliosheath. But the most compelling evidence came from its Plasma Science Experiment – an instrument that had stopped working on Voyager 1 in 1980. Until recently, the space surrounding Voyager 2 was filled mostly with plasma flowing out from our Sun. This outflow, called the solar wind, creates a bubble, the heliosphere, that envelops all the planets in our solar system. Voyager 2’s Plasma Science Experiment can detect the speed, density, temperature, pressure and flux of that solar wind. On the same day that the spacecraft’s cosmic ray instrument detected a steep decline in the number of solar energetic particles, the plasma science instrument observed a decline in the speed of the solar wind. Since that date, the plasma instrument has observed no solar wind flow in the environment around Voyager 2, which makes mission scientists confident the probe has entered interstellar space.
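As a rough illustration of the kind of signature the team watched for – a sudden, sustained step change in an instrument’s readings – here’s a toy Python sketch. The numbers are invented, and the real analysis involves far more than a simple threshold:

```python
# Fake daily solar wind speed readings, km/s. The sharp drop mimics the
# kind of step change Voyager 2's plasma instrument observed.
solar_wind_speed = [410, 405, 398, 402, 396, 30, 12, 8, 5, 3]

def step_change_index(samples, drop_fraction=0.5):
    """Return the first index where a reading falls below a set
    fraction of the running average of all earlier readings."""
    for i in range(1, len(samples)):
        baseline = sum(samples[:i]) / i
        if samples[i] < baseline * drop_fraction:
            return i
    return None

i = step_change_index(solar_wind_speed)
print(f"Sharp decline first seen at sample {i}: {solar_wind_speed[i]} km/s")
```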

graph showing data from the cosmic ray and plasma science instruments on Voyager 2

This animated graph shows data returned from Voyager 2's cosmic ray and plasma science instruments, which provided the evidence that the spacecraft had entered interstellar space. Image credit: NASA/JPL-Caltech/GSFC | + Expand image

Though the spacecraft have left the heliosphere, Voyager 1 and Voyager 2 have not yet left the solar system, and won't be leaving anytime soon. The boundary of the solar system is considered to be beyond the outer edge of the Oort Cloud, a collection of small objects that are still under the influence of the Sun's gravity. The width of the Oort Cloud is not known precisely, but it is estimated to begin at about 1,000 astronomical units from the Sun and extend to about 100,000 AU. (One astronomical unit, or AU, is the distance from the Sun to Earth.) It will take about 300 years for Voyager 2 to reach the inner edge of the Oort Cloud and possibly 30,000 years to fly beyond it. By that time, both Voyager spacecraft will be completely out of the hydrazine fuel used to point them toward Earth (to send and receive data) and their power sources will have decayed beyond their usable lifetime.
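The travel-time estimates above follow from simple arithmetic. This sketch assumes an outbound speed of roughly 3.3 AU per year, an approximation of Voyager 2’s actual rate of about 15 kilometers per second:

```python
# Rough travel times to the Oort Cloud, using figures from the text.
speed_au_per_year = 3.3   # approximate Voyager 2 outbound speed
oort_inner_au = 1_000
oort_outer_au = 100_000

years_to_inner = oort_inner_au / speed_au_per_year
years_to_outer = oort_outer_au / speed_au_per_year

print(f"~{years_to_inner:,.0f} years to the Oort Cloud's inner edge")
print(f"~{years_to_outer:,.0f} years to pass beyond its outer edge")
# ~303 and ~30,303 years -- consistent with the estimates above.
```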

Why It’s Important

Since the Voyager spacecraft launched more than 40 years ago, no other NASA mission has encountered as many planets (some of which had never been visited) or continued making science observations from such great distances. Other spacecraft, such as New Horizons and Pioneer 10 and 11, will eventually make it to interstellar space, but we will have no data from them to confirm their arrival or explore the region because their instruments either already have shut off or will have by then.

Watch on YouTube

Interstellar space is a region that’s still mysterious because until 2012, when Voyager 1 arrived there, no spacecraft had visited it. Now, data from Voyager 2 will help add to scientists’ growing understanding of the region. Scientists are hoping to continue using Voyager 2’s plasma science instrument to study the properties of the ionized gases, or plasma, that exist in the interstellar medium by making direct measurements of the plasma density and temperature. This new data may shed more light on the evolution of our solar neighborhood and will most certainly provide a window into the exciting unexplored region of interstellar space, improving our understanding of space and our place in it.

As power wanes on Voyager 2, scientists will have to make tough choices about which instruments to keep turned on. Further complicating the situation is the freezing cold temperature at which the spacecraft is currently operating – perilously close to the freezing point of its hydrazine fuel. But for as long as both Voyager spacecraft are able to maintain power and communication, we will continue to learn about the uncharted territory of interstellar space.

Teach It

Use these standards-aligned lessons and related activities to get students doing math and science with a real-world (and space!) connection.

Explore More

TAGS: Teachers, Educators, Science, Engineering, Technology, Solar System, Voyager, Spacecraft, Educator Resources, Lessons, Activities

  • Ota Lutz
READ MORE

Animation showing InSight landing on Mars

Tom Hoffman, InSight Project Manager, NASA JPL, left, and Sue Smrekar, InSight deputy principal investigator, NASA JPL, react after receiving confirmation InSight is safe on the surface of Mars

This is the first image taken by NASA's InSight lander on the surface of Mars.

The Instrument Deployment Camera (IDC), located on the robotic arm of NASA's InSight lander, took this picture of the Martian surface on Nov. 26

UPDATE: Nov. 27, 2018 – The InSight spacecraft successfully touched down on Mars just before noon on Nov. 26, 2018, marking the eighth time NASA has succeeded in landing a spacecraft on the Red Planet. This story has been updated to reflect the current mission status. For more mission updates, follow along on the InSight Mission Blog and JPL News, as well as on Facebook and Twitter (@NASAInSight, @NASAJPL and @NASA).


In the News

NASA’s newest mission to Mars, the InSight lander, touched down just before noon PST on Nov. 26. So while some people were looking for Cyber Monday deals, scientists and engineers at NASA’s Jet Propulsion Laboratory were monitoring their screens for something else: signals from the spacecraft that it successfully touched down on the Red Planet.

InSight spent nearly seven months in space, kicked off by the first interplanetary launch from the West Coast of the U.S. Once it arrived at the Red Planet, InSight had to perform its entry, descent and landing, or EDL, to safely touch down on the Martian surface. This was perhaps the most dangerous part of the entire mission because it required that the spacecraft withstand temperatures near 1,500 degrees Fahrenheit, quickly put on its brakes by using the atmosphere to slow down, then release a supersonic parachute and finally lower itself to the surface using 12 retrorockets.

When NASA’s InSight descends to the Red Planet on Nov. 26, 2018, it is guaranteed to be a white-knuckle event. Rob Manning, chief engineer at NASA’s Jet Propulsion Laboratory, explains the critical steps that must happen in perfect sequence to get the robotic lander safely to the surface. | Watch on YouTube

But even after that harrowing trip to the surface, InSight will have to overcome one more challenge before it can get to the most important part of the mission, the science. After a thorough survey of its landing area, InSight will need to carefully deploy each of its science instruments to the surface of Mars. It may sound like an easy task, but it’s one that requires precision and patience.

It’s also a great opportunity for educators to engage students in NASA’s exploration of Mars and the importance of planetary science while making real-world connections to lessons in science, coding and engineering. Read on to find out how.

How It Works: Deploying InSight’s Instruments

InSight is equipped with three science investigations with which to study the deep interior of Mars for the first time. The Seismic Experiment for Interior Structure, or SEIS, is a seismometer that will record seismic waves traveling through the interior of Mars.

These waves can be created by marsquakes, or even meteorites striking the surface. The Heat Flow and Physical Properties Package, or HP3, will investigate how much heat is still flowing out of Mars. It will do so by hammering a probe down to a depth of up to 16 feet (about 5 meters) underground. The Rotation and Interior Structure Experiment, or RISE, will use InSight’s telecommunications system to precisely track the movement of Mars through space. This will shed light on the makeup of Mars’ iron-rich core.
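To give a sense of the physics behind HP3’s measurement, conductive heat flux through the subsurface follows Fourier’s law: flux equals thermal conductivity times the temperature gradient. The sketch below uses placeholder values, not HP3 specifications or data:

```python
# Fourier's law for conductive heat flow: q = k * (dT/dz).
# Both numbers below are assumed values for illustration only.
thermal_conductivity = 0.04  # k, in W/(m*K), assumed for loose regolith
temp_gradient = 0.5          # dT/dz, in K per meter, assumed

heat_flux = thermal_conductivity * temp_gradient  # W per square meter
print(f"Estimated heat flux: {heat_flux * 1000:.0f} mW/m^2")  # 20 mW/m^2
```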

But to start capturing much of that science data, InSight will first have to carefully move the SEIS and HP3 instruments from their stowage area on the lander deck and place them in precise locations on the ground. Among its many firsts, InSight will be the first spacecraft to use a robotic arm to place instruments on the surface of Mars. Even though each instrument will need to be lowered only a little more than three feet (1 meter) to the ground, it’s a delicate maneuver that the team will rehearse to make sure they get it right.

InSight’s robotic arm is nearly 6 feet (about 2 meters) long. At the end of the arm is a five-fingered grappler that is designed to grab SEIS and HP3 from the deck of the lander and place them on the ground in front of the lander in a manner similar to how a claw game grabs prizes and deposits them in the collection chute. But on Mars, it has to work every time.

InSight will be the first mission on another planet to use a robotic arm to grasp instruments and place them on the surface. While it may look like an arcade machine, this space claw is designed to come away with a prize every time. | Watch on YouTube

Before the instruments can be set down, the area where they will be deployed – commonly referred to as the work space – must be assessed so SEIS and HP3 can be positioned in the best possible spots to meet their science goals. InSight is designed to land with its solar panels at an east-west orientation and its robotic arm facing south. The work space covers about three square meters to the south of the lander. Because InSight is a three-legged lander and not a six-wheeled rover, science and engineering teams must find the best areas to deploy the instruments within the limited work space at InSight’s landing spot. That is why choosing the best landing site (which for InSight means one that is very flat and has few rocks) is so important.

Just as having two eyes gives us the ability to perceive depth, InSight will use a camera on its robotic arm to take what are known as stereo-pair images. These image pairs, made by taking a photo and then moving the camera slightly to the side for another image, provide 3D elevation information that’s used by the science and engineering teams. With this information, they can build terrain maps that show roughness and tilt, and generate something called a goodness map to help identify the best location to place each instrument. Evaluating the work space is expected to take a few weeks.
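The core geometry of stereo imaging fits in a few lines. This sketch uses the classic pinhole-camera relation – depth equals focal length times baseline divided by disparity – with made-up numbers rather than anything specific to InSight’s camera:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("the feature must shift between the two images")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 pixels between two images taken 0.2 meters
# apart by a camera with a 1,000-pixel focal length is about 5 m away.
print(f"{depth_from_disparity(1000, 0.2, 40):.1f} m")
```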

Once the team has selected the locations where they plan to deploy the instruments, the robotic arm will use its grapple to first grab SEIS and lower it to the surface. When the team confirms that the instrument is on the ground, the grapple will be released and images will be taken. If the team decides they like where the instrument is placed, it will be leveled, and the seismic sensor will be re-centered so it can be calibrated to collect scientific data. If the location is deemed unsuitable, InSight will use its robotic arm to reposition SEIS.

But wait, there’s more! SEIS is sensitive to changes in air pressure, wind and even local magnetic fields. In fact, it is so sensitive that it can detect ground movement as small as half the radius of a hydrogen atom! So that the instrument isn’t affected by the wind and changes in temperature, the robotic arm will have to cover SEIS with the Wind and Thermal Shield.

After SEIS is on the ground and covered by the shield, and the deployment team is satisfied with their placement, the robotic arm will grab the HP3 instrument and place it on the surface. Just as with SEIS, once the team receives confirmation that HP3 is on the ground, the grapple will be released and the stability of the instrument will be confirmed. The final step in deploying the science instruments is to release the HP3 self-hammering mole from within the instrument so that it will be able to drive itself into the ground. The whole process from landing to final deployment is expected to take two to three months.

Why It’s Important

For the science instruments to work – and for the mission to be a success – it’s critical that the instruments are safely deployed. So while sending a mission to another planet is a huge accomplishment and getting pictures of other worlds is inspiring, it’s important to remember that science is the driver behind these missions. As technologies advance, new techniques are discovered and new ideas are formulated. Opportunities arise to explore new worlds and revisit seemingly familiar worlds with new tools.

Using its science instruments, SEIS and HP3, plus the radio-science experiment (RISE) to study how much Mars wobbles as it orbits the Sun, InSight will help scientists look at Mars in a whole new way: from the inside.

SEIS will help scientists understand how tectonically active Mars is today by measuring the power and frequency of marsquakes, and it will also measure how often meteorites impact the surface of Mars.

HP3 and RISE will give scientists the information they need to determine the size of Mars’ core and whether it’s liquid or solid; the thickness and structure of the crust; the structure of the mantle and what it’s made of; and how warm the interior is and how much heat is still flowing through.

Answering these questions is important for understanding Mars, and on a grander scale, it is key to forming a better picture of the formation of our solar system, including Earth.

Teach It

Use these resources to bring the excitement of NASA’s newest Mars mission and the scientific discovery that comes with it into the classroom.

Explore More

Follow Along

Resources and Activities

Feature Stories and Podcasts

Websites and Interactives

TAGS: InSight, Landing, Mars, K-12 Educators, Informal Educators, Engineering, Science, Mission Events

  • Lyle Tavernier
READ MORE

Satellite images of the 2018 Carr and Ferguson wildfires in California

Update – August 8, 2018: This feature, originally published on August 23, 2016, has been updated to include information on 2018 fires and current fire research.

Once again, it’s fire season in the western United States, with many citizens finding themselves shrouded in wildfire smoke. Late summer in the west brings heat, low humidity and wind – optimal conditions for fire. These critical conditions have resulted in the Mendocino Complex Fire, the largest fire in California’s recorded history. Burning concurrently in California are numerous other wildfires, including the Carr Fire, the 12th-largest in California history.

Because of their prevalence and effects on a wide population, wildfires will remain a seasonal teachable moment for decades to come. Follow these links to learn about NASA’s fire research and see images of current fires from space. Check out the information and lessons below to help students learn how NASA scientists use technology to monitor and learn about fires and their impacts.


In the News

You didn’t need to check social media, read the newspaper or watch the local news to know that California wildfires were making headlines this summer. Simply looking up at a smoke-filled sky was enough for millions of people in all parts of the state to know there was a fire nearby.

Fueled by high temperatures, low humidity, high winds and five years of vegetation-drying drought, more than 4,800 fires have engulfed 275,000-plus acres across California already this year. And the traditional fire season – the time of year when fires are more likely to start, spread and consume resources – has only just begun.

With wildfires starting earlier in the year and continuing to ignite throughout all seasons, fire season is now a year-round affair not just in California, but also around the world. In fact, the U.S. Forest Service found that fire seasons have grown longer in 25 percent of Earth's vegetation-covered areas.

For NASA's Jet Propulsion Laboratory, which is located in Southern California, the fires cropping up near and far are a constant reminder that its efforts to study wildfires around the world from space, the air and on the ground are as important as ever.

JPL uses a suite of Earth satellites and airborne instruments to help better understand fires and aid in fire management and mitigation. By looking at multiple images and types of data from these instruments, scientists compare what a region looked like before, during and after a fire, as well as how long the area takes to recover.

Animation of the FireSat network of satellites capturing wildfires on Earth

This animation shows how FireSat would use a network of satellites around the Earth to detect fires faster than ever before. | + Expand image

While the fire is burning, scientists watch its behavior from an aerial perspective to get a big-picture view of the fire itself and the air pollution it is generating in the form of smoke filled with carbon monoxide and carbon dioxide.

Natasha Stavros, a wildfire expert at JPL, joined Zach Tane with the U.S. Forest Service during a Facebook Live event (viewable below) to discuss some of these technologies and how they're used to understand wildfire behavior and improve wildfire recovery.

Additionally, JPL is working with a startup in San Francisco called Quadra Pi R2E to develop FireSat, a global network of satellites designed to detect wildfires and alert firefighting crews faster. Once complete, the network’s array of more than 200 satellites will use infrared sensors to detect fires around the world much faster than is possible today. Working 24 hours a day, the satellites will be able to automatically detect fires as small as 35 to 50 feet wide within 15 minutes of when they begin. And within three minutes of a fire being detected, the FireSat network will notify emergency responders in the area.

Using these technologies, NASA scientists are gaining a broader understanding of fires and their impacts.

Why It's Important

One of the ways we often hear wildfires classified is by how much area they have burned. Though this is certainly of some importance, of greater significance to fire scientists is the severity of the fire. Wildfires are classified as burning at different levels of severity: low, medium and high. Severity is a function of intensity – how hot the fire was – and its spread rate, the speed at which it travels. A high-severity fire is going to do some real damage. (Severity is measured by the damage left after the fire, but it can be estimated during a fire event by calculating the spread rate and measuring flame height, which indicates intensity.)
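One classic way fire scientists tie these quantities together is Byram’s fireline-intensity relation, I = H × w × r, which also yields a rough flame-length estimate. The sketch below uses illustrative input values, not NASA data:

```python
# Byram's fireline intensity: I = H * w * r, in kilowatts per meter of
# fire front. All input values below are assumed for illustration.
heat_yield = 18_000  # H: heat yield of the fuel, kJ per kg (typical)
fuel_consumed = 1.5  # w: fuel burned per square meter, kg (assumed)
spread_rate = 0.05   # r: fire spread rate, meters per second (assumed)

intensity = heat_yield * fuel_consumed * spread_rate  # kW/m
flame_length = 0.0775 * intensity ** 0.46  # Byram's flame-length estimate, m

print(f"Fireline intensity: {intensity:,.0f} kW/m")
print(f"Estimated flame length: {flame_length:.1f} m")
```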

Google Earth image showing fire severity
This image, created using data imported into Google Earth, shows the severity of the 2014 King Fire. Green areas are unchanged by the fire; yellow equals low severity; orange equals moderate severity; and red equals high severity. A KMZ file with this data is available in the Fired Up Over Math lesson linked below. Credit: NASA/JPL-Caltech/E. Natasha Stavros.

The impacts of wildfires range from the immediate and tangible to the delayed and less obvious. The potential for loss of life, property and natural areas is one of the first threats that wildfires pose. From a financial standpoint, fires can lead to a downturn in local economies due to loss of tourism and business, high costs related to infrastructure restoration, and impacts to federal and state budgets.

The release of greenhouse gases like carbon dioxide and carbon monoxide is also an important consideration when thinking about the impacts of wildfires. Using NASA satellite data, researchers at the University of California, Berkeley, determined that between 2001 and 2010, California wildfires emitted about 46 million tons of carbon, around five to seven percent of all carbon emitted by the state during that time period.
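Those figures also make for quick classroom arithmetic: if 46 million tons was five to seven percent of the state’s carbon output over that decade, simple division recovers the implied statewide total:

```python
# Back out the implied statewide carbon total from the study's figures.
wildfire_tons = 46_000_000

low_total = wildfire_tons / 0.07   # if fires were 7% of emissions
high_total = wildfire_tons / 0.05  # if fires were 5% of emissions

print(f"Implied statewide output: {low_total / 1e6:,.0f} to "
      f"{high_total / 1e6:,.0f} million tons of carbon")
# Roughly 660 to 920 million tons over 2001-2010.
```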

Animation showing carbon monoxide levels rising from the Station Fire in Southern California.
This animation from NASA's Eyes on the Earth visualization program shows carbon monoxide rising (red is the highest concentration) around Southern California as the Station Fire engulfed the area near JPL in 2009. Image credit: NASA/JPL-Caltech

In California and the western United States, longer fire seasons are linked to changes in spring rains, vapor pressure and snowmelt – all of which have been connected to climate change. Wildfires serve as a climate feedback loop, meaning certain effects of wildfires – the release of CO2 and CO – contribute to climate change, thereby enhancing the factors that contribute to longer and stronger fire seasons.

While this may seem like a grim outlook, it’s worth noting that California forests still act as carbon sinks – natural environments that are capable of absorbing carbon dioxide from the atmosphere. In certain parts of the state, each hectare of redwood forest is able to store the annual greenhouse gas output of 500 Americans.

Studying and managing wildfires is important for maintaining resources, protecting people, properties and ecosystems, and reducing air pollution, which is why JPL, NASA and other agencies are continuing their study of these threats and developing technologies to better understand them.

Teach It

Have your students try their hands at solving some of the same fire-science problems that NASA scientists do with these two lessons that get students in grades 3 through 12 using NASA data, algebra and geometry to approximate burn areas, fire-spread rate and fire intensity:

Explore More


Lyle Tavernier was a co-author on this feature.

TAGS: teachable moments, wildfires, science

  • Ota Lutz
READ MORE