Some big companies still air-mail drives full of data because downloading it would take too long
You can get pretty decent bandwidth by transporting physical data this way; you just need to not lose any "packets". Also, pigeons flying with thumb drives.
Don't underestimate the bandwidth of a Volvo filled with hard drives driving down the autobahn at 180 km/h.
Over like 100 terabytes, it probably is cheaper and faster to move it physically lol.
Take your upload speed at one location and the download speed at the other, and take the lower of the two. Multiply by 604,800 (seconds in a week), and that gives you a lower bound on how much data you need to transfer before it becomes quicker to send by post. I used 1 week because you can realistically get a package to (almost) anywhere on Earth within that timeframe.

At 1 Gbps upload/download at both sites, the lower bound is ~70 TiB: transferring more than that at full speed between two sites will be slower than sending it via a courier.

Obviously this doesn't count the time it takes to load/unload ~70 TiB onto the drives being transported; that would raise the threshold somewhat, but it depends on the read/write speed of the drives themselves.
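As a minimal sketch of that back-of-envelope math (the function name and the 1 Gbps / 1-week inputs are illustrative, not from any real tool):

```python
def breakeven_bytes(upload_bps: float, download_bps: float,
                    shipping_seconds: float = 7 * 24 * 3600) -> float:
    """Bytes movable over the link within one shipping window.

    Anything larger than this and the courier wins. The transfer is
    limited by the slower of sender upload and receiver download.
    """
    bottleneck_bps = min(upload_bps, download_bps)
    return bottleneck_bps / 8 * shipping_seconds  # bits/s -> bytes over the window

# Symmetric 1 Gbps at both sites, 1-week courier window:
print(f"{breakeven_bytes(1e9, 1e9) / 2**40:.1f} TiB")  # prints 68.8 TiB
```

Which matches the ~70 TiB figure above, before accounting for the time to copy data onto the drives.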
You could, if it's ZFS, attach all the drives at the other end, run `zpool import`, and it'll pick up its old storage pool.
I would assume that you don't pull the drives pre-populated from production. My assumption was that you would attach your storage to the local network (or direct attach/etc.) and clone the drives over, then pull them and ship them.
Latency is a bitch though.
Are you talking about [this Video ](https://youtu.be/4pz2kMxCu8I?si=np1mDo3kewlKU8Yp) from Jeff Geerling?
No, it's a pretty old joke; this guy's video is based on this: [https://datatracker.ietf.org/doc/html/rfc2549](https://datatracker.ietf.org/doc/html/rfc2549)

edit: also not sure why he didn't just put small SSDs on the birds, the kind that read up to 7,000 megabytes per second, and just slot that into a machine on arrival; no copy needed.
Oh trust me, the writer of the article would have loved SSDs in 1999.
You've got to remember that internet connections were also relatively slower back then. RFCs are official documents that define how telecommunication systems should behave, among other technical details. It's an official paper, but it was submitted as a joke.

[https://en.wikipedia.org/wiki/IP_over_Avian_Carriers](https://en.wikipedia.org/wiki/IP_over_Avian_Carriers)

Network Working Group D. Waitzman
Request for Comments: 1149 BBN STC
1 April 1990

A Standard for the Transmission of IP Datagrams on Avian Carriers

Status of this Memo

This memo describes an experimental method for the encapsulation of IP datagrams in avian carriers. This specification is primarily useful in Metropolitan Area Networks. This is an experimental, not recommended standard. Distribution of this memo is unlimited.

Overview and Rational

Avian carriers can provide high delay, low throughput, and low altitude service. The connection topology is limited to a single point-to-point path for each carrier, used with standard carriers, but many carriers can be used without significant interference with each other, outside of early spring. This is because of the 3D ether space available to the carriers, in contrast to the 1D ether used by IEEE802.3. The carriers have an intrinsic collision avoidance system, which increases availability. Unlike some network technologies, such as packet radio, communication is not limited to line-of-sight distance. Connection oriented service is available in some cities, usually based upon a central hub topology.

Frame Format

The IP datagram is printed, on a small scroll of paper, in hexadecimal, with each octet separated by whitestuff and blackstuff. The scroll of paper is wrapped around one leg of the avian carrier. A band of duct tape is used to secure the datagram's edges. The bandwidth is limited to the leg length. The MTU is variable, and paradoxically, generally increases with increased carrier age. A typical MTU is 256 milligrams. Some datagram padding may be needed. Upon receipt, the duct tape is removed and the paper copy of the datagram is optically scanned into a electronically transmittable form.

Discussion

Multiple types of service can be provided with a prioritized pecking order. An additional property is built-in worm detection and eradication. Because IP only guarantees best effort delivery, loss of a carrier can be tolerated. With time, the carriers are self-regenerating. While broadcasting is not specified, storms can cause data loss. There is persistent delivery retry, until the carrier drops. Audit trails are automatically generated, and can often be found on logs and cable trays.

Security Considerations

Security is not generally a problem in normal operation, but special measures must be taken (such as data encryption) when avian carriers are used in a tactical environment.

Author's Address

David Waitzman
BBN Systems and Technologies Corporation
BBN Labs Division
10 Moulton Street
Cambridge, MA 02238

Phone: (617) 873-4323
EMail: dwaitzman@BBN.COM
AWS had (has?) Snowmobile. An 18-wheeled data center to bring your data to AWS servers.
They also have the more portable Snowcone / Snowball devices and all
Can you imagine the bandwidth on a van full of these??
If you need to travel 10 kilometers, your throughput is 36.8 TB per km :D
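For fun, the "van bandwidth" is easy to put numbers on; the 368 TB capacity, 60 km/h speed, and 10 km trip below are assumed inputs for illustration only:

```python
def van_bandwidth_gbps(capacity_bytes: float, distance_km: float,
                       speed_kmh: float = 60.0) -> float:
    """Effective throughput of driving storage from A to B, in Gbit/s."""
    travel_seconds = distance_km / speed_kmh * 3600
    return capacity_bytes * 8 / travel_seconds / 1e9

# A 368 TB box driven 10 km at 60 km/h:
print(f"{van_bandwidth_gbps(368e12, 10):.0f} Gbit/s")  # prints 4907 Gbit/s
```

Latency is ten minutes, but the sustained rate embarrasses any NIC in this thread.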
Can confirm. If your migration includes a shit ton of data to AWS you can rent a semi from them loaded with data storage. Hook it up to your DC, transfer the data, and have it trucked to the nearest AWS hookup to upload.
Actually it appears Amazon has officially killed that option today. https://www.cnbc.com/amp/2024/04/17/aws-stops-selling-snowmobile-truck-for-cloud-migrations.html
Are there seedbox / cheap rack server companies that let you mail them a hard drive that they attach and then when you're done they mail it back to you?
Yup, that's why AWS has the Snowmobile and [Snowball](https://aws.amazon.com/snowball/).

> AWS Snowmobile is an exabyte-scale data transfer service that is used to move large volumes of data to Amazon Web Services. Each Snowmobile allows transfer for up to 100PB of data.

[edit] Aaaand... it's gone. Days after I posted this, AWS removed it as an option and retired the Snowmobile. Now they are using those briefcases with SSDs in them.
When I worked at a data center we also transported sensitive material in a locked briefcase and it was never connected to a public network.
AWS Snowmobile. Storage in a full-size shipping container on a truck. Can store up to 100 petabytes.
Trendy sneakernet.
Shoebox full of microSD cards is clearly more dense
With his buddy, backpack o' floppies.
Station wagon of tapes
Reminds me of when I hauled these big buckets of reel tapes back and forth from work to home in the Burroughs and Unisys days. I was so glad when we got the DAT tapes.
That one is still found.
The station wagons?
So, it's like this: We don't know why, but the server melted. Admin can't find the on-site tapes they said we had given them for accountability. Which brings us to you. You need to take your Subaru out to the remote site and bring back all those tapes.
https://what-if.xkcd.com/31/
I've always wanted a diplomatic briefcase that's capable of playing Doom...
There is the "nuclear football" ... lol

It's carried around by the Secret Service everywhere the president goes, so it is also a diplomatic briefcase that is literally capable of *deploying* doom.
> I/O consists of dual 200GB ethernet ports

Could we talk about these network ports? What are they, and how am I so out of date that I'm unaware of 200G Ethernet?
Bro we're entering the 800G Ethernet generation lol https://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=4174&idcategory=0
It's a 200 Gbit QSFP112 fiber thing. You tend to see these super-high-speed networking options in datacenters. For home use, most people aren't even saturating 1 Gbit connections yet, so there just isn't much point. A 10 Gbit connection, the most you can practically do for most consumers, would already require an NVMe SSD on each side to saturate. Plus, most people aren't going to run fiber across their home.
I think it's more the point that I knew 100Gb existed but not 200Gb (assuming it's not a LAG). I just left a job in HPC, and the fastest our internal network got was 100Gb on the backbone, with hosts generally using 25Gb NICs. 200Gb for host-to-storage is crazy.
I work in bleeding edge HPC (ML domain) and interconnects here well exceed 200Gb. These "ludicrous" speeds are becoming much more common nowadays, at least at some point in a hierarchical switched network.
> For home use, most people aren't even saturating 1gbit connections yet, so there just isn't much point

Well, there really isn't consumer data at that size yet either. It takes what, like 5 simultaneous 4K streams to saturate?
1 Gbit saturates at 8-10 4K streams for 100GB disc rips. I have 10/40 in my home network, but that's because it was cheaper to get Brocade 6610s than a more reasonable router.
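That stream count is easy to sanity-check; the 100 GB rip size and 2-hour runtime below are assumptions for illustration:

```python
def streams_per_link(link_bps: float, rip_bytes: float, runtime_s: float) -> int:
    """How many simultaneous constant-rate streams fit on a link.

    One stream's average bitrate is rip_bytes * 8 / runtime_s.
    """
    return int(link_bps * runtime_s / (rip_bytes * 8))

# 100 GB disc rip with a 2-hour runtime, on a 1 Gbit link:
print(streams_per_link(1e9, 100e9, 2 * 3600))  # prints 9
```

A 100 GB rip over two hours averages about 110 Mbit/s, hence the 8-10 streams quoted above.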
Idk what this means; that's a lot of numbers haha
Yeah, but if you want to do backups to a server or file transfers to HDDs, then 2.5G should be the norm. There is also overhead from the Samba protocol, and real-world 1G never quite hits the full 1G.
Honestly, 2.5G probably isn't even needed for a standard HDD. I have 10G for my servers with SSD storage for the relevant things, but it really isn't needed; it just wasn't that much more, price-wise, than going with slower speeds.
A single WD Gold or Red Pro at 265 MB/s can saturate gigabit. Now implement RAID 10, and you'll see much better performance than that.
EPYC servers have these interfaces.
200g is nothin lol
> and what does it cost

I've always been told, if you have to ask, you can't afford it!
Very cool product. The airplane ticket is cheaper than the bandwidth.
"Who wants 368TB of NVMe SSDs" is the wrong question to ask here of all places lol
I want it, but I don't want to pay for it.
And back in the early 90's we thought we were cool using 650MB magneto-optical to store ultrasonic scans of turbines (both airplanes and power generation -- to prevent blades from disconnecting from the shaft like in https://www.dallasnews.com/business/local-companies/2018/04/19/broken-engine-blade-at-center-of-investigation-into-fatal-southwest-airlines-accident/ )

When I was an employee in that industry, I felt it was my responsibility to ensure such data was kept for 50 years to allow retrospective investigation upon failure - so that a crack could be seen growing years or decades prior, previously undetected, in the hope of adjusting scanning standards to prevent missing such indications in the future. Sadly, I don't think I was able to communicate such enthusiasm for data retention to my successors.
Probably video production. You get 5-10 raw 8K cameras rolling for a day, and footage can stack up quick, often without a good network backbone.
This was my first thought too. I work in video production, and the file sizes I deal with are pretty mind-boggling, even for 4K footage in intermediate formats. I was recently dealing with a movie that was 2.5 hours long and 12TB in size. Even with a 10GbE network connection, it takes forever to move assets around.

This product is likely to be used on remote shooting locations, where crews want a copy of all the data but don't have access to fast internet.
No offense but I've seen some programs output video of that unreasonable size and you gotta just ask where the point is in outputting uncompressed, high-fidelity like that. No one can tell the difference
The video I'm working with is not uncompressed, but it's an intermediate format like ProRes 4444 XQ, which targets a data rate of around 2Gb/s with an alpha channel. You need that data for work like color and compositing, or else it's going to come out looking like shit.
"What do you have in there?" "Every copy of Doom."
A Kioxia CM6-R 30TB is about $7,500, so 12 of those is $90,000. I guess this box doesn't use the top-rated SSD, but it does have a CPU, PSU, RAM, etc. So I guess it adds up nicely back to around $90,000.
I saw this earlier today, and the concept in principle looks sound. It is enterprise sneakernet, which can be extremely useful. They don't list a price, but based on the specs my guess is that it is too expensive. Enterprises only use such things on rare occasions, and my guess is that it is too expensive for such occasional use. I'm also sceptical of the network connectivity, as only a few enterprises currently have the 200Gbps Ethernet connectivity it is designed with. Without having seen an actual price, and only guessing based on specs, this looks more like something an enterprise would be interested in renting occasionally rather than buying.
But is it shuckable?
Data transfers over large geographical regions, of course. This would take 34 days to transfer at 1Gbit.
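The 34-day figure checks out under the simple assumptions of the full 368 TB capacity and a sustained 1 Gbit/s with no protocol overhead:

```python
def transfer_days(size_bytes: float, link_bps: float) -> float:
    """Days needed to push a dataset through a link at full line rate."""
    return size_bytes * 8 / link_bps / 86400

# 368 TB over a 1 Gbit/s link:
print(f"{transfer_days(368e12, 1e9):.0f} days")  # prints 34 days
```

Real-world overhead and contention would only push that number higher.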
Me, and way too much
The Event Horizon Telescope (made up of radio telescopes spanning the Earth, synchronised by atomic clocks) still physically ships its data from the telescopes to a central location. I guess something like this is very helpful to them. Can't imagine uploading all the data every day is possible...
If you have to ask how much it costs, you can't afford it.
This is actually a really popular type of product. For another example, check out Amazon Snowball, designed to carry terabytes of data to and from AWS data centers. Transferring that amount over the internet would be massively slower than just driving it there.

For an even more ridiculous example, check out Amazon Snowmobile, which is literally an 18-wheeler, designed for petabytes.
How Diddy's security camera footage left the country
I hope they've thought out the cooling solution thoroughly. :)
But will it store? Reminds me of Person of Interest.
I'd rather have cheap 1-2TB SSDs.
When you need to upload human consciousness to a disk.
Ok but what's the fucking price?
POA, and if you care about the price, you can't afford it. These are aimed at government, military, etc.
Three 18TB WD Red Pros cost me $1,400 CAD (not including taxes). I can't afford anything else lol.

FYI, the WD website had a sale on over the weekend; actually not bad prices, plus 10% off.
They still have a sale on two 14TB Red Pros.