Lost Maya City Revealed in Guatemala with LiDAR

In northern Guatemala, the jungle still swallows sound. Helicopters pass, cicadas resume their pulse, and then the canopy closes over again. Yet on a scientist’s screen the undergrowth falls away in seconds. Laser pulses sweep the forest and return a clean, bare-earth model. Lines sharpen into streets. Mounds resolve into platforms. A city plan appears where the eye saw only leaves.

That city is part of a much larger pattern. Airborne surveys across the Mirador-Calakmul Karst Basin (MCKB) have mapped hundreds of ancient settlements stitched together by raised causeways. Many belong to the Preclassic era, centuries before the great Classic capitals flourished. The picture that emerges is not a scatter of hamlets but a connected landscape of civic centres, waterworks and engineered fields—substantial, organised, and old.

What LiDAR actually does for jungle archaeology

LiDAR—light detection and ranging—fires rapid laser pulses from a plane or helicopter and measures their return times. Most hits bounce off leaves. Enough reach the ground that, after careful filtering, archaeologists can generate a digital surface stripped of vegetation. From that “bare-earth” model, shaded reliefs bring out ridges, depressions, terraces, dams, roads and masonry platforms. The method is fast, consistent and, crucially, sees through canopy where walking surveys struggle.

In the MCKB, teams processed the data at half-metre resolution and classified returns to separate canopy from soil. The result is a regional map with house-mounds and monumental cores shown together. You can follow a causeway for kilometres without leaving your chair.
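The workflow in these two paragraphs can be sketched in a few lines. The code below is a deliberately simplified illustration, not the production pipeline: real surveys use dedicated ground-classification algorithms, whereas here the lowest return per half-metre cell stands in for "ground", and a standard hillshade brings out the relief.

```python
import numpy as np

def bare_earth_dem(points, cell=0.5):
    """Grid (x, y, z) returns at `cell` resolution, keeping the lowest
    return per cell as a crude stand-in for the ground under canopy."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)
    nx, ny = xy.max(axis=0) + 1
    dem = np.full((nx, ny), np.inf)
    for (i, j), z in zip(xy, points[:, 2]):
        dem[i, j] = min(dem[i, j], z)
    dem[np.isinf(dem)] = np.nan  # cells with no returns stay empty
    return dem

def hillshade(dem, azimuth=315.0, altitude=45.0, cell=0.5):
    """Shade the surface from a given sun azimuth and altitude so that
    mounds, terraces and causeways stand out as light and shadow."""
    az, alt = np.radians(azimuth), np.radians(altitude)
    dzdx, dzdy = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)
```

Fed a synthetic cloud of canopy and ground hits, the lowest-return rule recovers the ground surface; the hillshade is what makes a straight causeway jump out of an otherwise flat model.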

Map showing basin boundaries and karst landforms across northern Petén.
Open-access figure illustrating the Mirador-Calakmul basin and karst features that shaped settlement and water management. Source: PLOS ONE (CC BY 4.0).

A city plan under the leaves

Take one of the newly mapped centres in the basin. At its heart stand triadic complexes—three pyramidal mounds set on a shared platform—arranged to command broad plazas. Nearby, an E-Group frames the horizon for solar observation. Around these, house-mounds sprawl in ordered clusters, separated by lanes and drainage. On the margins, earthworks manage water: channels, berms, artificial basins. From this core, a raised white road—the sacbe—runs straight towards a neighbour, linking communities that traded, intermarried and shared ceremonies.

These forms belong to a Preclassic world that had already mastered scale. The architecture speaks of pooled labour, formal leadership, and a calendar that set the tempo of work and worship. The roads speak of coordination beyond a single town. And the waterworks remind us that ingenuity often begins with the unglamorous problem of storage and flow.

LiDAR visualisation of El Mirador’s civic core with triadic complexes and La Danta sector.
Open-access LiDAR images of El Mirador’s core, showing triadic architecture and the La Danta complex. Source: Ancient Mesoamerica (CC BY 4.0).

From “isolated ruins” to a connected lowland

For much of the twentieth century, lowland Maya sites were treated as islands in a green sea. LiDAR has forced a different metaphor. The mapped landscape looks like a web: nodes of architecture tied together by arteries of stone. Survey after survey shows the same grammar—plazas, triads, E-Groups, dams—repeated across districts. In northern Guatemala alone, researchers condensed hundreds of settlement clusters into more than four hundred cities, towns and villages, linked by over one hundred and seventy kilometres of raised roads.

Numbers matter here not as trivia but as a check on imagination. The volumetrics of platforms and pyramids imply massive quantities of fill. The length and width of causeways indicate design standards. The capacity of reservoirs points to dry-season planning. When you aggregate these across a basin, you get the outline of administration: who could command labour, how far authority reached, and where borders hardened.
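The volumetrics described here are, at bottom, arithmetic on the bare-earth model. A hedged sketch, assuming a DEM patch in metres at the survey's 0.5 m resolution (the grids below are invented for illustration): fill volume and reservoir capacity are per-cell heights and depths, summed and scaled by cell area.

```python
import numpy as np

def fill_volume(dem, base_level, cell=0.5):
    """Volume of construction fill above `base_level`, in cubic metres."""
    height = np.clip(dem - base_level, 0, None)
    return float(np.nansum(height) * cell * cell)

def reservoir_capacity(dem, rim_level, cell=0.5):
    """Water volume a depression could hold up to `rim_level`, in m^3."""
    depth = np.clip(rim_level - dem, 0, None)
    return float(np.nansum(depth) * cell * cell)
```

A 20 m by 20 m basin two metres below its rim holds 800 cubic metres; aggregate such figures across a basin and the labour and storage estimates in the text take shape.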

How researchers verify a “discovery” made from the air

LiDAR does not replace spadework; it directs it. Teams test the models on the ground: a line on a hillshade becomes a trench across a suspected wall; a bright mound on a slope becomes a test pit in household fill; a rectilinear depression becomes a cut through a reservoir berm. Sherds secure dates. Floors and sealing layers confirm sequences. As seasons pass, the picture from the air is checked, corrected and extended by stratigraphy and artefacts.

This back-and-forth matters because jungle relief can mislead. Roots mimic lines. Erosion softens corners. Looters leave scars that pass for ancient cuts. A careful workflow—air, then ground, then air again—keeps enthusiasm honest and lets researchers scale from a promising cluster to a reliable map of an entire district.

Why a Preclassic city in Petén matters now

It changes the timeline. The Preclassic, once painted as a long prelude, now reads as a period of fast innovation and regional coordination. Monumental buildings and formal planning appear earlier than textbooks implied. Causeways show that planners thought beyond a single centre. Water systems show that climate risk was managed in stone and soil.

It also shifts the conversation about population. Dense settlement around civic cores, plus intensive terracing and bajos converted to fields, point to far more people on the landscape than older models allowed. That, in turn, reframes debates about sustainability, forest use, and the kinds of political institutions capable of organising work at that scale.

Streets, reservoirs, calendars: the texture of daily life

Walk the model with a human in mind and the abstractions thin. A passer-by would feel the rise from domestic lanes to the plaza’s open floor, see stairways that coaxed a slow procession to the temple top, and cross a causeway raised just enough to keep feet dry after rain. The city’s plan choreographed movement, time and attention.

None of this denies change. Some centres grew, paused and grew again. Others faded quietly. LiDAR preserves both—the grandeur and the quiet endings—because it maps the bones of layout rather than the glamour of painted plaster. Even worn-down mounds keep their form when the forest forgets their colour.

Method in brief: how the numbers stack up

Survey flights covered a broad swath of northern Petén at high density, delivering ground returns accurate to decimetres. Analysts filtered the point clouds with open and proprietary tools and produced digital elevation models at 0.5-metre resolution. From those, they measured platforms, traced roadbeds, calculated reservoir capacities and compared architectural formats across sites. The regional synthesis condensed hundreds of clusters into tiered site hierarchies and tallied causeway lengths to show how centres related and where corridors of traffic likely ran.

Those figures come with caveats. A DEM does not give a date. A mound’s volume hints at labour yet does not tell you who paid, who directed, or how long a season lasted. Even so, when you combine the models with excavations and ceramics, a strong outline appears: large, early cities linked by engineered roads and water systems, all working within a karstic basin that both constrained and enabled growth.

Conservation stakes in a mapped landscape

Maps are not neutral. They guide both research and policy. In the Maya Biosphere Reserve, a good LiDAR layer helps rangers and communities plan patrols, route trails and rank threats. It also sharpens debates about development: where roads should not go, which wetlands should remain intact, and how tourism can be channelled to avoid fragile architecture.

It also brings local history closer to those who live with it. Communities across Petén already carry the heritage of the region in language, craft and memory. A model that shows the city under the leaves is not just a tool for scholars; it is a prompt for schools, guides and regional museums.

Forest canopy seen from the summit of La Danta at El Mirador.
Vista across Petén’s canopy from La Danta, linking field experience to the LiDAR model. Source: Wikimedia Commons (CC BY 2.0).

What sets this “lost city” apart

Every centre has quirks. One may favour triadic groups arranged in a chain. Another may build its reservoir system as nested basins. A third may run a sacbe arrow-straight through bajos that flood in the rains. In the new maps, these preferences become comparable. Planners copied, adapted and sometimes overrode local terrain to get the effect they wanted. Over time, styles converge and diverge in waves you can see from the air.

The Guatemalan basin is especially telling because many of its centres are early. We are looking at experiments close to the start of a tradition. That makes the evidence precious. It shows how quickly complexity took hold and how far cooperation extended before the Classic period’s famous dynasties.

Field seasons that follow the pixels

Once the models are in hand, a season on the ground works like a checklist. Teams cut narrow transects to confirm a wall here, a stair there. They core reservoir floors to find silts, pollen and charcoal. They sample house-mounds to build a picture of diet and craft. They trace causeways at ground level to record paving and alignments. Step by step, the air’s big picture acquires texture: dates, materials, repairs, even episodes of deliberate demolition.

Because the data covers such wide areas, archaeologists can also test ideas at regional scale. Do E-Groups appear first in particular corners of the basin? Do triadic complexes cluster near wetlands? Does causeway width correlate with the size of civic cores? Questions that once required decades of foot survey can now be posed, and partly answered, inside a single project window—and then refined on the ground.
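Questions like these reduce to simple statistics once the measurements are tabulated. A toy example with invented figures (these are not survey values) shows the shape of such a test:

```python
import numpy as np

# Hypothetical measurements for six centres, purely illustrative.
core_area_ha   = np.array([3.0, 8.0, 15.0, 22.0, 40.0, 55.0])  # civic core size
causeway_width = np.array([6.0, 9.0, 12.0, 14.0, 18.0, 21.0])  # metres

# Pearson correlation between the two measurements; a value near 1
# would suggest a design standard that scales with the core it serves.
r = np.corrcoef(core_area_ha, causeway_width)[0, 1]
print(f"r = {r:.2f}")
```

The real analysis would of course control for terrain, date and preservation, but the point stands: with regional coverage, such tests take minutes rather than decades of foot survey.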

Dense rainforest canopy around Tikal in northern Guatemala.
The kind of canopy LiDAR penetrates to reveal roads, terraces and reservoirs. Source: Wikimedia Commons (CC BY).

What this means for visitors and the region

Visitors who make the trek to El Mirador, Nakbe or El Tintal meet two landscapes at once. There are the platforms, stairs and stelae in the heat and shade. And there is the ghostly second city, the one on a tablet, where every subtle rise is traced and every terrace line is clear. Guides increasingly carry both worlds. They can point from the screen to the horizon and back again. It makes the past less abstract and the present more anchored.

For the region, a clear map can support better infrastructure decisions and stronger protection for cultural and natural resources. It can also spread attention. Lesser-known sites gain visibility when they appear on the same network map as the giants. That, in turn, can help distribute tourism and research time more evenly, easing pressure on famous cores while widening the story told to visitors.

Key takeaways at a glance

LiDAR has given northern Guatemala a coherent archaeological map. It shows early, complex cities linked by engineered roads and water systems. It resets assumptions about when large-scale planning began. And it provides a practical tool for conservation and community projects today. The “lost city” is no longer a rumour; it is a plan you can scroll, measure and then walk.

AI Deciphers a 2,000-Year-Old Vesuvius Scroll

Two thousand years ago a library was buried in heat and ash. Shelves collapsed. Scrolls became charcoal. Generations later, the same pages are whispering again. The breakthrough did not come from scalpels or glue. It arrived through X-rays, code, and a determined global effort to find letters where the human eye sees none. When people say “AI has deciphered a 2,000-year-old scroll burned in the Vesuvius eruption,” they mean this: algorithms trained on carbon ink are now mapping invisible characters inside sealed papyrus rolls, and scholars are beginning to read actual passages instead of guessing at shadows.

It sounds like a fable. It is also a careful sequence of scans, models, and checks. First, a micro-CT machine records the internal layers of a scroll without touching it. Next, a pipeline called virtual unwrapping models those layers as surfaces. Then, machine learning looks for the subtle texture change that ink leaves in the X-ray volume. Finally, papyrologists confirm the results, letter by letter, against the habits of Greek handwriting and known vocabulary. Each step matters. Together, they turn a charred log back into a book.

What was found, and why it matters now

The Herculaneum papyri make up the only intact library to survive from the ancient world. The collection was found at the Villa of the Papyri near modern Ercolano, a site hit by intense heat when Vesuvius erupted in 79 CE. For centuries the rolls were too fragile to open. Some were destroyed by early attempts to slice and peel. Others broke into flakes. A few lines were saved, yet the core remained silent. That silence has begun to lift. Recent competitions and collaborations have revealed entire columns of Greek text from inside sealed scrolls, including discussions tied to the Epicurean philosopher Philodemus. The texts are not mere curiosities. They comment on pleasure, perception, music, and taste. They show a literary voice mid-argument, not a museum label frozen in amber.

Crucially, these readings are not one-off miracles. New scans, new models, and new training sets continue to push the percentage of readable text upward. That scale changes how historians plan. Instead of hoping for a line or two, teams prepare to confront chapters. With that, the tone of the field shifts from rescue to research. What once felt like salvage now looks like the start of a new workflow for long-buried writing.

How the reading actually works

Here is the practical chain. A scroll is imaged at very high resolution using micro-CT or phase-contrast CT. The resulting volume shows layers folded, buckled, and fused. Software identifies surfaces, unwraps them virtually, and lays them flat without tearing a single fibre. On those flattened patches, machine-learning models scan for the signature of carbon ink. That signal is faint. Ink and papyrus are both carbon-rich. Yet they behave differently in X-rays and in the geometry of the fibres. Algorithms trained on labelled fragments learn the difference and mark the likely strokes. Researchers then assemble patches into columns and words. Papyrologists step in to judge where ink is genuine, where artefact, and how letters form syllables. The cycle repeats until a page emerges.
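The ink-detection step in that chain can be caricatured in a few lines. Real pipelines train neural networks on labelled fragments; in this sketch a local-roughness score stands in for the learned signal, purely to show the shape of the computation on a flattened patch. The function names and threshold rule are illustrative, not part of any published system.

```python
import numpy as np

def texture_score(patch, win=5):
    """Local standard deviation in a sliding window: carbon ink sits on
    the fibres and perturbs surface texture, so unusually rough
    neighbourhoods are candidate strokes."""
    pad = win // 2
    p = np.pad(patch, pad, mode="reflect")
    out = np.empty_like(patch, dtype=float)
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].std()
    return out

def candidate_ink(patch, win=5, k=1.0):
    """Mark pixels whose texture score sits k sigma above the patch mean."""
    s = texture_score(patch, win)
    return s > s.mean() + k * s.std()
```

A trained model replaces the hand-built score with features learned from fragments where the ink is visible, but the output is the same kind of object: a per-pixel map of "likely stroke" that papyrologists then judge.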

This is not guesswork. It is tested against fragments where the text is already visible. If a model can find the same letters in an X-ray volume that a camera sees on the surface, confidence grows. When three different models point to the same word in the same place, confidence grows further. And when multiple labs can reproduce the result, the reading moves from excitement to evidence.
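The agreement check lends itself to a compact formalisation: stack the per-model ink masks and keep only the pixels a majority marks. A minimal sketch with three hypothetical model outputs over a tiny patch:

```python
import numpy as np

def consensus(masks, min_votes=None):
    """Keep pixels where at least `min_votes` of the per-model boolean
    ink masks agree; default is a simple majority."""
    stack = np.stack(masks).astype(int)
    if min_votes is None:
        min_votes = stack.shape[0] // 2 + 1
    return stack.sum(axis=0) >= min_votes

# Three hypothetical model outputs over a 2x3 patch:
a = np.array([[1, 1, 0], [0, 1, 0]], bool)
b = np.array([[1, 0, 0], [0, 1, 1]], bool)
c = np.array([[1, 1, 0], [1, 1, 0]], bool)
agreed = consensus([a, b, c])  # kept only where 2 of 3 models mark ink
```

In practice the models differ in architecture and training data, which is exactly what makes their agreement informative.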

Why AI was needed

The ink on these rolls is largely carbon. Standard X-ray methods separate materials by how they absorb energy. Carbon on carbon looks like shadow on shadow. The trick was to stop looking for darkness and start looking for texture. Ink lies on top of fibres and subtly changes the surface. In the volume, that leaves a tell-tale pattern machine learning can pick up once it has seen enough examples. In other words, computers learn to see what we cannot. People still make the call, but AI does the first pass at scale and speed.

There is also a social reason. Opening the data brought thousands of minds to the same puzzle. Prize challenges motivated coders, students, and researchers to try segmentation tools, transformer models, and novel loss functions on the same scans. Papyrologists and computer scientists found a common language: Does this patch look like ink? Can you show it again with a different model? How do we avoid hallucination? The outcome is more robust than a single lab working alone.

Virtually unwrapped view of PHerc. 172 showing columns of Greek text
First image from inside sealed scroll PHerc. 172, produced with high-resolution scanning and AI-assisted analysis. Source: Bodleian Libraries / Vesuvius Challenge

From first words to full passages

Early success arrived as a single Greek word. Soon after, longer phrases appeared. By early 2024, teams had released images showing columns dense with letters from inside an unopened scroll. Those lines point to a treatise that weighs everyday pleasures—food, fragrance, music—and the senses they stir. Not all words are clear. Not all sentences are complete. But enough of the argument stands to anchor commentary and translation. For the first time, the inner voice of a sealed Herculaneum roll speaks in something like full paragraphs.

That change in scale matters. A stray term might excite headlines; a passage changes scholarship. Passages let scholars cross-reference citations, track terms, and match style with known authors. They also give translators context, which reduces guesswork. A full column stabilises meaning in a way a fragment never can.

Eighteenth-century schematic of a device to unroll carbonised papyri
Historic unrolling machine that damaged many rolls. Virtual methods now avoid physical contact entirely. Source: Wikimedia Commons

What the scans reveal about the scrolls themselves

The volumes show more than letters. They reveal how papyrus sheets were rolled, glued, and repaired. They capture folds, tears, and seams. They map voids where air pockets preserved a curve. They even show how heat changed fibre patterns. That structural information helps restorers, informs conservation, and guides algorithm design. If the model knows a patch lies on a sharply curved fold, it can compensate for distortion before testing for ink.

The scans also highlight the scale of the work ahead. Many rolls are bigger than they look, with dozens of layers packed into a single visible ridge. What looks like one page may be ten. Virtual tools make those pages accessible without a single cut, yet they still require time, compute, and verification. Reading a library remains a marathon, not a sprint.

Where the imaging happens

Several facilities support the effort. University labs handle micro-CT scans and controlled experiments on known fragments. National light sources contribute phase-contrast CT and high-energy imaging. Each instrument adds a piece to the puzzle—resolution here, contrast there, throughput elsewhere. Together they provide the slices, blocks, and beams that virtual unwrapping needs. As the workflow improves, scans become faster and models more accurate. The practical goal is simple: move from a few columns to whole scrolls, then from a handful of scrolls to a shelf.

Progress often depends on patient engineering. Better sample mounts reduce motion. Smarter reconstruction reduces noise. Improved segmentation follows fibre paths more faithfully. These incremental gains look small on a lab note; on a 50-centimetre roll they add up to a readable chapter.

Exterior of the Diamond Light Source synchrotron in Oxfordshire
A modern synchrotron facility used for high-energy X-ray imaging that supports virtual unwrapping workflows. Source: Wikimedia Commons

Checks, balances, and avoiding wishful readings

Because the ink signal is subtle, the community has built guardrails. Teams publish model architectures and validation strategies. Multiple pipelines verify the same patch. Independent reviewers examine whether strokes align with papyrus fibres, whether letter shapes match the script style, and whether vocabulary fits context. When claims survive this scrutiny, confidence grows. When they do not, the images go back in the queue for rework.

That discipline pays off. It keeps excitement honest and prevents a flood of weak “finds” that would erode trust. It also protects the fragile relationship between computer vision and classical philology. Each must respect the other’s strengths. The result is a shared standard: show the patch; explain the model; justify the reading.

Beyond one scroll: a roadmap for a buried library

Reading one roll is proof of concept. Reading a shelf is rescue. The roadmap includes higher-throughput scanning, better layer tracking, semi-automated stitching of segments, and language models tuned to ancient Greek that can suggest but not overrule human readers. The dream extends further. If excavations one day recover deeper rooms at the Villa of the Papyri, a second library may emerge. Should that happen, the tools now maturing will be ready.

In the meantime, the current batch of scrolls is more than enough to occupy teams for years. Each new patch calibrates the next. Each column opens a path for commentary. Each translation anchors a footnote that once seemed fanciful. It is slow, patient work—the kind that leaves a field changed when you look up a decade later.

Artefacts from the Villa of the Papyri displayed in Naples
Artefacts from the Villa of the Papyri. The buried library here preserves texts now being revealed through imaging and AI. Source: Wikimedia Commons

Common questions, answered simply

Is this “AI reading the past” on its own?

No. Models detect likely ink. People read the letters, test interpretations, and argue about syntax, as they should. The partnership works because each side does what it does best.

Are the images edited?

The virtual pages are reconstructions from the scan. Pipelines document each step, from segmentation to flattening to ink detection. Reviewers demand that the same result appear across different models and runs before accepting it as text.

What about errors?

Mistakes happen. Artefacts can mimic strokes. That is why teams cross-validate and publish methods. If a reading fails replication, it is revised or withdrawn. The process is designed to learn in public.

What this changes for classics and history

First, it increases supply. More texts mean broader arguments and fewer gaps in chains of citation. Second, it rescues voices outside the standard canon. Epicurean works dominate the known rolls, but even within that school we may find authors and genres that rarely survive elsewhere. Third, it refreshes method. Philologists now learn to read volumes and patches, not just photographs. Computer scientists learn to think in accents and scribal habits. That cross-training will outlast this project.

Finally, it repositions hope. For years, people spoke about the library as a lost treasure. Now they speak about it as a working archive. The difference is subtle and powerful. A treasure is admired. An archive is read.

Where this leaves the rest of us

If you care about the ancient world, this is good news. If you care about what AI is for, this is a model. It shows technology serving a clear human aim: understanding words left by people who thought hard about how to live. The story also shows how open data and public competitions can accelerate careful research without sacrificing rigour. Headlines come and go. The text on a page does not. Once a line is secure, it will be read for as long as people read Greek.

The old nightmare was that the great books were gone. The new reality is that some of them are back, line by line, with enough clarity to teach, provoke, and delight. That is worth a sober celebration—and a fresh budget line for scanners and servers.