I wish containers hadn't introduced the abstractions of a registry and an image, and had instead exposed everything as tar files (which is roughly what it is under the covers) served over a glorified file server. The abstraction leads people to assume there's some kind of magic happening and that the entire process is very arcane, when in reality it's mostly just unpacking tar files.
A .tar file on a file server wouldn't suffice, because you also have to store a bunch of metadata for that .tar file. You could argue that the metadata could just be a file sitting next to the tar file. True, but a container registry is essentially that, plus optimizations, plus some useful extra features. If you started from a simple file server and thought through what you actually need, a few iterations later you would end up with something that looks a lot like a container registry.
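To make the "metadata next to the tar file" idea concrete, here's a rough sketch of what that metadata looks like in practice: an OCI-style image manifest, a small JSON document that points at the config and layer blobs by digest. The digests and sizes below are made-up placeholders, not real values.

```python
import json

# Sketch of the metadata a registry keeps beside the "tar files":
# an OCI-style image manifest. Digests/sizes are placeholders.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:" + "0" * 64,  # placeholder digest
        "size": 7023,
    },
    "layers": [
        {
            # each layer is "just" a tar.gz blob, addressed by its hash
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:" + "1" * 64,  # placeholder digest
            "size": 32654,
        },
    ],
}

print(json.dumps(manifest, indent=2))
```

The registry's job beyond the file server is mostly serving documents like this and letting clients fetch blobs by digest.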
I hope the image registry abstraction opens up the possibility of improving the storage and transfer of images without breaking backwards compatibility. See https://www.cyphar.com/blog/post/20190121-ociv2-images-i-tar for why tar is not optimal.
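A minimal illustration of one of the problems discussed there: layers are content-addressed as whole tar blobs, so changing a single file produces a completely different layer digest, and the entire layer has to be moved again. This is a self-contained stdlib sketch; the file names and contents are invented for the example.

```python
import hashlib
import io
import tarfile

def layer_digest(files: dict[str, bytes]) -> str:
    """Pack files into an in-memory tar and return its content-addressed
    digest, which is how a registry identifies a layer blob."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in sorted(files.items()):
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = 0  # fixed timestamp keeps the archive deterministic
            tar.addfile(info, io.BytesIO(data))
    return "sha256:" + hashlib.sha256(buf.getvalue()).hexdigest()

base = {"app/main.py": b"print('v1')\n", "app/lib.py": b"# helpers\n"}
patched = {**base, "app/main.py": b"print('v2')\n"}

# One changed file -> entirely new layer digest -> the whole layer must be
# re-uploaded and re-downloaded. That is one cost of tar-as-layer.
print(layer_digest(base) != layer_digest(patched))  # True
```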
I don't mind that registries are a thing but I hate that they're so baked into the tools, which I think is what you're getting at.
That's actually how I (DIY) deploy several static sites from my local machine. I build the Docker image (Hugo sites with nginx, exposing a single port for HTTP traffic) and save it as a tar. Ironically the base images do come from a registry, but I can't deploy to a public registry and don't want to host my own.
On the server where I need to run the site, I just transfer the tar, load the image, and run it. It's so straightforward that I much prefer it to being dependent on external registry sites for deployments.
Heh - in college I took a network security class that required us to set up our own attack/defend environments, and one of the important parts of the report was to explain to the TAs how to set up our environment. We were able to save ourselves a ton of writing by just exporting docker images like this and giving instructions on how to load those onto the VMs :)
Someone else I knew made .deb packages, which might have been smarter, since there were some hosts we didn't containerize (mainly ones that handled routing and the like). I now know we might have been able to get away with containerizing those too, but at the time I didn't, and it seemed like too much hassle for an already complicated project.
It's interesting that skopeo [1] hasn't popped up in this discussion, partly because it's part of Red Hat's container tools along with podman, and partly because, although it started out as a tool for examining remote images, it also supports migrating containers, and not just between registries.
From the linked website: "Skopeo is a tool for moving container images between different types of container storages. It allows you to copy container images between container registries like docker.io, quay.io, and your internal container registry or different types of storage on your local system". Perhaps Red Hat plans to roll skopeo's functionality into Podman at some point?
Anybody know of a simple/lightweight registry for local usage? Quay bills itself as a super-duper enterprisey all-in-one solution. I'm looking for something more like a 'simple HTTP server with basic ACLs'.
I don't know if it's lightweight enough, but I have some experience with [Harbor](https://goharbor.io/) at our company. The ACL model it presents is simple enough. Maybe it was just our setup, but it ended up running a lot of components on our cluster, so I can't vouch for local use.
I ended up replacing it with AWS ECR. We only have a couple of container repos so ECR only ends up costing a few dollars per month. Not local, but very easy and almost free.
I use this trick to push to servers on an unnecessarily locked-down network I sometimes have to deploy to, which can't see my source control or container registry.
But I do it with Docker. My overall sense is that Podman is trying to reach feature parity with Docker but isn't there yet. Feedback on this formulation?
There are a few places where Podman probably has to catch up with Docker, but conversely, I think Docker is still trying to catch up with Podman when it comes to running rootless (i.e. running containers from a user account without root privileges).
candiddevmike | 4 years ago:
If you want to DIY a container with unix tools, this should help: https://containers.gitbook.io/build-containers-the-hard-way/
hutrdvnj | 4 years ago:
westurner | 4 years ago:
The TUF spec (and the PyPI TUF PEPs) explains why a tar over HTTPS (even with DNSSEC, a CA cert bundle, CRLs, and OCSP) isn't sufficient for secure software distribution. "#ZeroTrust DevOps"; #DevSecOps
What's the favorite package format with content signatures, key distribution, a keyring of trusted (authorized) keys, and a cryptographically-signed manifest of per-file hashes, permissions, and extended file attributes? FWIW, ZIP at least does a CRC32.
We now have the Linux Foundation CNCF sigstore for any artifact, including OCI container images.
W3C ld-proofs is a newer web standard that package managers unfortunately haven't yet migrated to. https://news.ycombinator.com/item?id=29355786
Because ld-proofs is RDF, it works in JSON-LD and you could merge the entire SBOM [1] and e.g. CodeMeta [2] Linked Data metadata for all of the standardized-metadata-documented components in a stack.
[1] https://github.com/google/osv/issues/55
[2] https://github.com/codemeta/codemeta
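For what it's worth, the kind of "signed manifest of per-file hashes" asked about above can be sketched in a few lines. This is an illustrative toy, not any real package format: it uses stdlib HMAC as a stand-in for a proper asymmetric signature with key distribution, and the helper names (`build_manifest`, `verify`) are invented.

```python
import hashlib
import hmac
import json

def build_manifest(files: dict[str, bytes], key: bytes) -> dict:
    """Toy signed manifest of per-file hashes. Real systems (TUF, sigstore,
    rpm) use asymmetric signatures; HMAC is only a stdlib stand-in here."""
    entries = {
        path: {"sha256": hashlib.sha256(data).hexdigest(), "size": len(data)}
        for path, data in sorted(files.items())
    }
    payload = json.dumps(entries, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"files": entries, "signature": sig}

def verify(manifest: dict, files: dict[str, bytes], key: bytes) -> bool:
    """Check the manifest signature first, then every file's hash."""
    payload = json.dumps(manifest["files"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest["signature"], expected):
        return False  # the manifest itself was tampered with
    return all(
        hashlib.sha256(files[p]).hexdigest() == e["sha256"]
        for p, e in manifest["files"].items()
    )
```

A real format additionally needs key rotation, revocation, and a trust root, which is exactly the gap TUF and sigstore address.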
zerd | 4 years ago:
chriswarbo | 4 years ago:
Running containers on AWS requires uploading an image to a registry/repository, but there isn't any API to actually upload the .json manifest files and .tar.gz layer files they need, even if I have them sitting in a folder right in front of me (compare this to Lambda, where we can upload a .zip to S3).
The worst part is that the "official" way to upload artifacts to a repository actually requires installing the docker command, running some sort of "login" subcommand in it, piping around secret tokens generated by the AWS CLI, etc.
The only other approach I could find is their "low level" chunk-based upload API. I wrote a Python script around that (pretty much just a while loop), so I can avoid this docker silliness.
https://docs.aws.amazon.com/AmazonECR/latest/userguide/getti...
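A sketch of what that while loop looks like. The `upload_part` callback is a stand-in for the real API call so the chunking logic can be shown without AWS credentials; the function names and default chunk size here are illustrative, not ECR's actual requirements.

```python
import hashlib
from typing import BinaryIO, Callable

def upload_in_chunks(
    blob: BinaryIO,
    upload_part: Callable[[int, int, bytes], None],
    chunk_size: int = 10 * 1024 * 1024,  # illustrative default
) -> str:
    """Feed a layer blob to upload_part(first_byte, last_byte, data) and
    return its sha256 digest, which the 'complete upload' step needs."""
    digest = hashlib.sha256()
    offset = 0
    while True:
        chunk = blob.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
        # chunk APIs typically take inclusive byte ranges
        upload_part(offset, offset + len(chunk) - 1, chunk)
        offset += len(chunk)
    return "sha256:" + digest.hexdigest()
```

With boto3 (if I have the names right) you would call `initiate_layer_upload` first, pass a callback wrapping `upload_layer_part`, then finish with `complete_layer_upload` using the returned digest.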
grappler | 4 years ago:
If I'm starting a project, I want everything small, simple, and self-contained within my project directory. If, right out of the gate, I'm supposed to already have a daemon running (as root it seemed for a while?) which will do the building and execution, and a "registry" set up somewhere, I've already got pieces I likely don't understand and magic commands and high level abstractions for interacting with them, and introductory material telling me "this magic command is all you have to do! look how much you don't have to worry about!"
But I am worrying about it. And that's why I love when people write these "hard way" guides. So thank you for that link.
juriansluiman | 4 years ago:
formerly_proven | 4 years ago:
.wh. files? S_IFWHT? Never heard of 'er! Layers as tars that need extracting? #gzipallthethings (what is that you're saying? gzip is slow?)
They did add mime types for layers, so in theory you could do something better, but that's not going to happen because it's not backwards compatible.
leetbulb | 4 years ago:
docker save <image> | ssh <remote host> docker load
matt_kantor | 4 years ago:
`docker save` archives the entire (often huge) image. `docker-pushmi-pullyu` uses an ephemeral registry as an intermediary, so it only needs to transfer layers that have changed. It saves me a lot of time.
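The layer-skipping that makes this fast is easy to sketch: because layers are content-addressed, the sender only needs to ask the receiver which digests it already has. A toy model below, with a dict standing in for the remote blob store (names and digests invented):

```python
def push(image_layers: dict[str, bytes], remote: dict[str, bytes]) -> list[str]:
    """Copy only the layers `remote` is missing; return the digests sent.
    In a real registry this membership test is a HEAD request per blob."""
    sent = []
    for digest, blob in image_layers.items():
        if digest not in remote:
            remote[digest] = blob
            sent.append(digest)
    return sent

registry: dict[str, bytes] = {}
v1 = {"sha256:aaa": b"base layer", "sha256:bbb": b"app v1"}
print(push(v1, registry))  # first push: both layers transferred
v2 = {"sha256:aaa": b"base layer", "sha256:ccc": b"app v2"}
print(push(v2, registry))  # second push: only the changed layer
```

`docker save`, by contrast, has no peer to compare digests with, so it always ships everything.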
ollien | 4 years ago:
realPubkey | 4 years ago:
technofiend | 4 years ago:
https://www.redhat.com/en/blog/skopeo-10-released#:~:text=Sk....
qbasic_forever | 4 years ago:
m463 | 4 years ago:
A `docker push`/`docker pull` can skip layers that already exist.
muhehe | 4 years ago:
briggers | 4 years ago:
docker run -d -p 5000:5000 --name registry registry:2
https://docs.docker.com/registry/#:~:text=The%20Registry%20i....
gangstead | 4 years ago:
nawgz | 4 years ago:
scheme271 | 4 years ago: