Tannat Cable: 21 Days of Drift, Argentina to Brazil, 25ms to 506ms
On April 1, 2026, the Tannat submarine cable was carrying packets between Las Toninas, Argentina and Santos, Brazil at a steady 25.05 milliseconds. Twenty samples taken over the first ten days of April clustered between 25.00 and 25.27ms — about as clean as a 2,000-kilometre submarine path can run. We even said so in our March-April health recap: Tannat appeared on the watch list with a handful of mild events but otherwise read as a textbook well-utilised, non-congested cable.
Then on April 17 something changed. Our first alert fired at 11:02 UTC. By May 7 the same Las Toninas-Santos path was reading 506 milliseconds — twenty times its physics floor — and the rolling baseline had been climbing without interruption for three weeks. No maintenance notice, no public outage report, no neighbouring cable showing the same pattern. This is what an opaque submarine-cable incident looks like when you only have probe-level visibility: a slow, deliberate drift across a cable that everyone else thinks is fine.
The 21-day timeline
Twelve alerts crossed our threshold between April 17 and May 7. Eight were classified critical, four were warnings. Each row below is one event our threshold-based detector caught — the rolling baseline column is what the cable looked like over the preceding 24-hour window when the spike was measured, not the spike itself.
| Alert | Detected (UTC) | Severity | Rolling baseline | Spike RTT | vs target |
|---|---|---|---|---|---|
| 2026041704 | Apr 17, 11:02 | critical | 81.7 ms | 27.0 ms | +388.8% |
| 2026042801 | Apr 28, 05:01 | critical | 156.1 ms | 125.8 ms | +293.8% |
| 2026042901 | Apr 29, 09:01 | critical | 189.0 ms | 27.1 ms | +657.7% |
| 2026043001 | Apr 30, 13:01 | warning | 207.5 ms | 27.3 ms | +203.8% |
| 2026043005 | Apr 30, 23:01 | critical | 249.2 ms | 232.4 ms | +1166.8% |
| 2026050201 | May 2, 03:01 | critical | 290.0 ms | 27.2 ms | +853.7% |
| 2026050202 | May 2, 11:01 | critical | 341.1 ms | 26.9 ms | +938.6% |
| 2026050301 | May 3, 17:01 | critical | 433.1 ms | 27.0 ms | +302.5% |
| 2026050401 | May 4, 03:01 | critical | 456.9 ms | 26.8 ms | +359.9% |
| 2026050601 | May 6, 11:01 | warning | 469.9 ms | 27.0 ms | +274.9% |
| 2026050701 | May 7, 07:01 | warning | 487.8 ms | 26.7 ms | +231.0% |
| 2026050702 | May 7, 13:01 | warning | 506.5 ms | 26.6 ms | +180.6% |
Two things in this table deserve attention before we go further.
First, the spike RTT column — the actual latency our probe measured when the alert fired — sits at 26 to 27 milliseconds in ten of the twelve events. The two outliers were a 232ms hit on April 30 at 23:01 and a 126ms reading on April 28. Otherwise the path is reading exactly what it should read: the great-circle 2,000-kilometre Argentina-Brazil distance, plus a small overhead for repeaters and landing-station termination, comes out around 25ms, and that is the number our probe is seeing during the spikes.
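That 25ms floor survives a back-of-the-envelope check. A quick sketch of the propagation arithmetic, assuming a typical silica-fibre refractive index of about 1.47 (a standard optics figure, not something we measured):

```python
# Back-of-the-envelope physics floor for a ~2,000 km submarine path.
# The refractive index is an assumed typical value, not a measured one.
C_KM_S = 299_792        # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47      # assumed refractive index of silica fibre
PATH_KM = 2_000         # Tannat's stated length

v_fibre = C_KM_S / FIBRE_INDEX          # ~204,000 km/s in glass
rtt_ms = 2 * PATH_KM / v_fibre * 1_000  # there and back
print(f"propagation-only RTT: {rtt_ms:.1f} ms")  # ~19.6 ms
```

Propagation alone comes to roughly 19.6ms; repeaters, landing-station termination, and terrestrial backhaul to the probes plausibly account for the remaining ~5ms.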
Second, the rolling baseline column climbs monotonically. April 17: 82ms. April 28: 156ms. April 30: 249ms. May 4: 457ms. May 7: 506ms. The baseline is the 24-hour median latency seen on this cable from all monitoring sources — global measurement aggregators plus our own probes — and it has been moving in one direction for three weeks. Twenty-five milliseconds at the start of April. Five hundred and six milliseconds yesterday afternoon.
Reading the table backwards
If the spike RTT is normal but the baseline is high, the picture inverts: most of the time, traffic on the Las Toninas-Santos path is running well above 25ms, and our threshold-based alerting only fires when a single measurement falls back to normal sharply enough to register as an anomaly against the rolling median. We are watching a cable whose normal state has shifted, and the alert system is, paradoxically, catching the moments when it briefly works the way it used to.
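To make the mechanics concrete, here is a minimal sketch of that style of detector. The specifics are assumptions rather than our production logic: a trailing 24-hour median as the baseline, and a symmetric deviation ratio as the alert condition, which is exactly why a healthy 27ms reading against a drifted 433ms baseline registers as an anomaly.

```python
from statistics import median

WINDOW_S = 24 * 3600   # trailing baseline window
RATIO = 2.0            # hypothetical threshold: fire beyond 2x or below 1/2x

def rolling_baseline(samples, now):
    """Median RTT (ms) over (timestamp, rtt_ms) samples in the last 24h."""
    window = [rtt for ts, rtt in samples if now - WINDOW_S <= ts <= now]
    return median(window) if window else None

def check(samples, now, rtt_ms):
    """Return alert details if rtt_ms deviates from baseline in EITHER direction."""
    baseline = rolling_baseline(samples, now)
    if baseline is None:
        return None
    ratio = rtt_ms / baseline
    if ratio >= RATIO or ratio <= 1.0 / RATIO:
        return {"baseline_ms": baseline, "spike_ms": rtt_ms, "ratio": round(ratio, 2)}
    return None

# A drifted window sitting at ~433 ms, probed by one healthy 27 ms sample:
history = [(ts, 433.0) for ts in range(0, WINDOW_S, 600)]
print(check(history, now=WINDOW_S, rtt_ms=27.0))  # fires: ratio ~0.06
```

Under those assumptions, a dip back to the physics floor fires just as loudly as a spike would, which is the behaviour the table shows.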
This is unusual. A cable that is partially down typically produces high spikes against a stable baseline (think of a fibre cut where the protection switch is slow). Tannat is doing the opposite: an elevated baseline with occasional dips back to physics-floor performance. The most parsimonious explanation is that the cable is up but heavily reconfigured — the bulk of traffic is taking a long detour through some secondary path, while a thin slice of traffic still finds the original route at moments when our probes happen to land.
The detour pattern matches what we would expect from a partial cable degradation handled by carrier-side traffic engineering. Google or Antel — the cable's two owners — could be steering most flows away from a particular fibre pair or wavelength while leaving a fraction on the original capacity for diagnostic purposes. Or the cable could be running at sharply reduced spectral efficiency on some pairs, forcing traffic onto longer alternative routes that happen to converge on Las Toninas and Santos at higher hop counts. We cannot see which, because submarine-cable owners do not publish that telemetry. We can only watch what the probes see.
The hop-count signal
One thing the probes do see: hop counts. Our RIPE Atlas measurements between Argentina and Brazil over the same window show the typical traceroute path lengthening over the incident: from an average of 13 hops on April 8, the closest pre-incident sample we have, to 16.9 hops on May 8, with individual paths going as long as 18 hops. That is consistent with packets being shunted onto a more roundabout route through additional intermediate networks. A direct cable run from Buenos Aires to São Paulo on Tannat ought to need somewhere between 9 and 13 hops end-to-end, depending on the exit network. Eighteen hops is a different path.
| Date | Avg hops AR↔BR | Min | Max | Samples |
|---|---|---|---|---|
| April 8 (pre-incident) | 13.0 | 13 | 13 | 1 |
| May 7 | 17.0 | 17 | 17 | 1 |
| May 8 | 16.9 | 13 | 18 | 9 |
The pre-incident sample size is small enough that we are cautious about reading too much into it; this is not a smoking gun. But the wide hop-count range on May 8 — 13 to 18 across nine simultaneous measurements — tells us that traffic between the two countries is genuinely running on multiple distinct paths rather than a single primary cable. That is what end-user latency looks like when a cable's traffic distribution has been quietly fragmented.
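The per-day rows above reduce to a simple aggregation over (date, hop count) pairs. A sketch of that reduction, using stand-in hop values chosen to reproduce the published May 8 aggregates rather than the raw RIPE Atlas results:

```python
from collections import defaultdict

def aggregate_hops(samples):
    """samples: iterable of (date, hop_count) -> per-day avg/min/max/count."""
    by_day = defaultdict(list)
    for date, hops in samples:
        by_day[date].append(hops)
    return {
        date: {"avg": round(sum(h) / len(h), 1), "min": min(h),
               "max": max(h), "n": len(h)}
        for date, h in sorted(by_day.items())
    }

# Stand-in values (not the raw measurements) matching the May 8 row:
may8 = [("2026-05-08", h) for h in (13, 16, 17, 17, 17, 18, 18, 18, 18)]
print(aggregate_hops(may8))
# {'2026-05-08': {'avg': 16.9, 'min': 13, 'max': 18, 'n': 9}}
```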
Neighbouring cables, clean
The narrowness of the event is what makes it worth writing about. Argentina, Uruguay, and Brazil are connected by more than just Tannat. Monet, Junior, ARBR, SAm-1, Malbec, Unisur, Firmina, Bicentenario, Americas-2, and Atlantis-2 all touch at least one of the three countries. We checked every one of them for alerts in the same 30-day window. Nothing. No critical events, no warnings, no rolling-baseline drift on any of those cables.
This rules out a regional event — there is no plausible environmental or geopolitical cause that would hit one cable on the AR-BR corridor and leave its neighbours untouched. Whatever happened on April 17 was specific to Tannat: a single 2,000-kilometre Argentina-Uruguay-Brazil cable, owned by Google and Antel, started behaving differently while every other cable on the same corridor continued to behave normally. The asymmetry is the story.
It is also worth noting that Firmina — Google's much larger 14,517-kilometre Argentina-to-US cable that came into service in 2026 — has been running clean throughout. Firmina is the cable Google built explicitly to reduce its dependence on Tannat for inter-region traffic out of South America. If Tannat is in a degraded state, the existence of Firmina removes most of the operational pressure to fix it quickly. Google can run South American egress on Firmina without Tannat being healthy. That changes the urgency calculus and may explain why we are still seeing a multi-week drift rather than a sharp repair-and-recover pattern.
What Tannat is, and where it lands
Tannat is a 2,000-kilometre cable connecting three landings — Las Toninas in Argentina, Maldonado in Uruguay, and Santos in Brazil — built by Google and Uruguay's state-owned Antel and brought into service in 2018. It carries six fibre pairs at 90 Tbps design capacity. The Las Toninas-Santos trunk is the cable's primary backbone; the Maldonado branch gives Uruguay direct international connectivity without depending on transit through Argentine or Brazilian carriers.
| Specification | Value |
|---|---|
| Length | 2,000 km |
| Ready for service | 2018 |
| Fibre pairs | 6 |
| Design capacity | 90 Tbps |
| Owners | Google, Antel (Uruguay) |
| Landings | Las Toninas (AR), Maldonado (UY), Santos (BR) |
| Pre-incident RTT (Apr 1) | 25.05 ms |
| Current rolling baseline (May 7) | 506.5 ms |
For nearly eight years Tannat has been the workhorse of Google's South American footprint — connecting São Paulo cloud workloads to Buenos Aires users without the long detour through Miami or Madrid. With Firmina live as of 2026, Tannat's strategic role has shrunk. It is no longer the only Google-owned path off the continent; it is now best understood as a regional intra-South-America cable, useful for Buenos Aires-São Paulo traffic and for Antel's Uruguayan international needs, rather than the only line out.
That demotion may matter for what happens next. A cable that carries the bulk of a hyperscaler's regional egress gets fixed fast. A cable that has been quietly demoted to a backup or specialised role can be allowed to drift while traffic is rerouted onto its replacement. Three weeks of degradation without a public maintenance notice fits the second pattern better than the first.
What we cannot see
We are honest about the limits of probe-level visibility. We do not know whether Tannat has a physical fault on the seabed, a controlled maintenance event with traffic deliberately steered off, a partial fibre-pair failure being papered over with rerouting, or an opaque commercial decision by Google to consolidate traffic on Firmina ahead of schedule. We cannot inspect the cable's optical equipment, read its alarm logs, or talk to the marine engineers responsible for it. What we can do is record what end-user latency looks like during the event and timestamp it precisely.
Public outage trackers (the ones that aggregate news reports of submarine-cable incidents) have not flagged Tannat. The Submarine Cable Map shows it as in service. There is no Reuters story, no BleepingComputer post, no official Google or Antel maintenance bulletin we can find. Whatever is happening is happening below the threshold of public reporting — which is precisely the gap our monitoring is built to fill.
Our measurements, three weeks in
For the entire 30-day window before May 8, the Las Toninas-Santos direction has produced 366 cable-health-check samples. The May 8 daily aggregate has reverted toward something more reasonable: average RTT 266ms across the day so far, no anomalies fired, ratio dropped to 0.59. One day is too short to call a recovery. But it is the first day in three weeks where the baseline appears to be moving down rather than up.
| Day | Samples | Avg baseline | Ratio | Anomalies |
|---|---|---|---|---|
| April 28 | 43 | 193 ms | 1.07 | 1 |
| April 30 | 38 | 207 ms | 1.68 | 2 |
| May 2 | 68 | 270 ms | 1.66 | 8 |
| May 4 | 26 | 336 ms | 1.36 | 3 |
| May 6 | 33 | 289 ms | 1.26 | 2 |
| May 7 | 54 | 348 ms | 1.21 | 3 |
| May 8 (so far) | 10 | 267 ms | 0.59 | 0 |
If May 8 holds and the trend continues, we are looking at the trailing edge of a three-week event with the baseline now retreating. If May 9 prints another high baseline, the incident is still active. We will know in 48 hours.
What to watch next
The recovery indicator is simple: rolling baseline back below 50ms on the Las Toninas-Santos direction, sustained for 48 hours. That is the first signal that whatever was redirecting traffic away from Tannat's primary path has been undone. A return to 25ms — the pre-incident floor — would confirm a full recovery of the cable's capacity profile, but on a cable of this age, drifting to a slightly higher new baseline (say 35-40ms) after an incident is not unusual either, because traffic engineering decisions made during a degradation sometimes stay in place once they are working.
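In code terms, the watch condition is small. A sketch, assuming the monitoring emits timestamped baseline readings; the 50ms and 48-hour figures are the ones stated above, everything else is illustrative:

```python
RECOVERY_MS = 50.0          # baseline threshold from the text
SUSTAIN_S = 48 * 3600       # must hold for 48 hours

def recovered(baselines):
    """baselines: list of (unix_ts, baseline_ms), oldest first.

    True once the baseline has stayed below 50 ms for a continuous
    48-hour stretch; any reading at or above the threshold resets it."""
    below_since = None
    for ts, ms in baselines:
        if ms < RECOVERY_MS:
            if below_since is None:
                below_since = ts
            if ts - below_since >= SUSTAIN_S:
                return True
        else:
            below_since = None
    return False
```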
The cable's live monitoring page updates every two hours with our latest measurements. The baseline number on that page is the one to watch. If you are reading this article and the page is showing 25-30ms, the event is over. If it is showing something north of 100ms, the rerouting is still in place.
Why this kind of monitoring matters
A cable that breaks loudly — a fibre cut from a fishing trawler, an earthquake offshore, a typhoon damaging a landing station — gets reported within hours. Repair ships sail; news stories appear; carriers issue maintenance bulletins. A cable that drifts quietly — its baseline crawling up over weeks, traffic shunted onto longer paths in the background, no public incident filed — gets noticed by almost nobody. That is the gap. Most public submarine-cable monitoring tools track which cables are listed as in-service or under repair according to their owners' own status pages. They cannot tell you what end-user latency actually looks like on the wire, day by day, between specific cities.
Our approach is different: probes, baselines, anomaly detection, and a public record. The Tannat event is the kind of three-week-long quiet incident that simply is not visible without that approach. The cable is up. The map says it is fine. The latency tells a different story.
Try it yourself
Live latency on the Tannat cable page, refreshed every two hours. For broader context on the same fleet, see our March-April 2026 health recap, where Tannat appeared on the watch list with five alerts before the drift fully accelerated. For comparison cables on the same Argentina-Brazil corridor that have stayed clean, see Firmina (Google's 14,517-km AR-US cable, 2026) and Monet (Brazil-USA, 2017). Our cable monitoring runs 24/7 from probes in Jerusalem, Minsk, Tbilisi, Almaty, and Sevastopol; the data behind every chart on this site is queryable through the public cables index and the submarine cable map.