In Chrome, you can just do as the author says, right click and "Save Image As".
Then just go to the folder where it is being downloaded, and copy/paste the file "lisa.jpeg.crdownload" to "lisa.jpeg.crdownload copy".
Rename to "lisa.jpeg" and cancel the download. You now have the image. What's interesting is that you ARE actually downloading this image. It's just that they don't terminate the connection.
We have a security proxy at work that gives you the bits, but then holds the connection open while it does a scan, then resets the connection if it doesn't like something inside. Both Chrome and Firefox [haven't tried IE/Edge, but I assume they'll do something the proxy vendor would want] infer [or are told?] that the connection broke and delete the interim file. Unfortunately, with zip files the central directory is at the end, so the proxy can't scan until the whole file is down.
For me, the easiest way to mitigate it turned out to be to use wget [with an appropriate user-agent... say, the same as my desktop browser]. wget gets the bits, but doesn't in any way molest the "partial" download when the connection resets. Then it tries to download the rest using the "Range" HTTP header, the server says "oh, dude, you already got the whole thing", wget declares success, and all the bits are in my download folder.
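The resume mechanics described here can be sketched in a few lines. This is a minimal illustration in Python, not wget's actual code, and the helper name is mine: a resuming client asks for everything past the bytes it already has on disk.

```python
import os

def build_range_header(path):
    """Build the Range header a resuming client (like wget -c) sends,
    asking for everything after the bytes already on disk."""
    have = os.path.getsize(path)
    return {"Range": f"bytes={have}-"}
```

If the local file is already complete, the server answers "416 Range Not Satisfiable", which a resuming client can treat as "nothing left to fetch".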
I believe that we pay, like, a lot for this proxy, which is annoying on two counts: 1) If I can get past it trivially, then presumably competent attackers can, too, and 2) Sometimes it takes a dislike to legitimate stuff, which is how I was forced to learn how to get around it.
I don't understand what this website is supposed to be demonstrating. Some sort of genius version of disabling right click I suppose. But I did download the image, because its contents were transferred to my computer's memory and displayed on my screen. I can see it clear as day.
If Web 3 is just willfully misunderstanding how computers work, I don't see a very bright future for it.
The problem with leaving connections open is that there's a limit on how many you can have on the server... I think the author has committed self-DoS :)
It would be possible to really close the connection but hack something so the client isn't informed (maybe just doing close() with SO_LINGER=0 and dropping the outgoing RST in iptables would be enough).
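The close() half of that idea can be sketched as follows; this is a hedged illustration assuming a POSIX-ish stack, and the iptables rule in the trailing comment is illustrative, not something from the original post.

```python
import socket, struct

def abortive_close(sock: socket.socket) -> None:
    """Close with SO_LINGER {on, timeout 0}: the kernel discards any
    untransmitted data and sends a RST instead of the normal FIN."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    sock.close()

# The second step would then drop that outgoing RST on the server,
# e.g. (illustrative, untested):
#   iptables -A OUTPUT -p tcp --sport 80 --tcp-flags RST RST -j DROP
```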
When you usually try to download an image, your browser opens a connection to the server and sends a GET request asking for the image.
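On the wire, that request is just a few lines of text. A minimal sketch (the helper name is mine, and real browsers send more headers than this):

```python
def image_get_request(host: str, path: str) -> bytes:
    """Assemble a minimal HTTP/1.1 request like the one a browser
    sends when it fetches an image."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Accept: image/*\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    ).encode()
```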
I'm not a web designer, but that seems rather ass-backwards. I'm already looking at the image, therefore the image is already residing either in my cache or in my RAM. Why is it downloaded a second time instead of just being copied onto my drive?
You can totally "download" the image in your RAM by right clicking / long pressing -> "copy image" or equivalent in most browsers. It's just not going to be a byte by byte identical file, and may be in a different format, e.g. you get a public.tiff on the clipboard when you copy an image from Chrome or Safari on macOS, even if the source image is an image/svg+xml.
As far as I remember from a project a few years ago, the browser doesn't include a Referer header on the download request, which can be used to tell the two apart. (You'll have to disable caching and ETags for this to work.)
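That distinction boils down to a tiny predicate on the server. A sketch under the stated assumptions (the function name is hypothetical, and the Referer heuristic is the commenter's, not a guarantee):

```python
def is_page_load(headers: dict) -> bool:
    """Guess whether a request came from an embedding page.

    The re-download triggered by "Save Image As" typically arrives
    without a Referer header, while the <img> fetch made while
    rendering the page carries one. (Caching and ETags must be
    disabled so the second request reaches the server at all.)
    """
    return bool(headers.get("Referer"))
```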
However, this is easily defeated by the use of the console: Select the sources tab, locate the image and simply drag-and-drop the image from there, which will use the local cache instance for the source. Works also with this site, at least with Safari.
I have a hard time understanding: what problem is this solving?
When the image is on my screen I can just screenshot it.
This is a common problem with using something in an insecure environment; that's why companies go to such lengths to encrypt movies along the whole chain from source to display, and even those are regularly dumped.
I don't know about browser internals, but I would guess that the browser decodes the image once into a format that can be shown on the page (so from PNG/JPG/WEBP into a RGBA buffer) and then discards the original file. This saves a bit of memory in 99.99% of cases when the image is not immediately saved afterwards.
There's another way to achieve this in a more malicious way. Granted I haven't tried it in years, but it was possible back in 2017 when I tested it.
The idea is to fake the image that's being displayed in the IMG element by forcing it to show a `background-image` using `height: 0;` and `padding-top`.
In theory, you could make an IMG element show a photo of puppies and if the person chose to Right-click > Save Image As then instead of the dog photo it could be something else.
For some reason I can't OAuth into CodePen, so for now I can't recreate it publicly.
You could also just do like we did for years and check the Referer header on the image request, and if it wasn't your web server, redirect to whatever file you want; the end user has no way of knowing. And because the trick is done on the server side, viewing the page source won't get around it.
This is the same method used to prevent hot linking to images back in the day.
Not very new, the technique's probably been around since the 2000s... e.g. you can't right click, save as on the web version of Instagram because all the images are background-images attached to DIVs. In the "old days" there'd be a 1x1 transparent GIF above the image, so any downloader would download that instead.
This does create a self-inflicted Slowloris attack on the server hosting the image, so this site is probably more susceptible to the hug of death than most.
It has always baffled me that browsers even try to re-download an image (or a page, or whatever) I asked them to save, despite the fact that they have already downloaded and displayed it. What I would want them to do instead is just dump it from memory.
And this seems particularly important in the case of a web page that has been altered at runtime by JavaScript - I want the actual DOM dumped, so I can later load it and display exactly what I see now.
Add:

  -N, --no-buffer
      Disables the buffering of the output stream. In normal work situations, curl will use a standard buffered output stream that will have the effect that it will output the data in chunks, not necessarily exactly when the data arrives. Using this option will disable that buffering.
Same for me, but the webpage gave the impression that it was still downloading: even after the download completed, at least in Firefox on iPhone, it still showed as downloading.
This is a perfect (if maybe unintentional) example of how to get help from otherwise disinterested technical folk: Make an obviously technically-incorrect claim as fact, and watch as an entire army comes out of the woodwork giving you technical evaluations :)
I’m aware of this phenomenon, but have never tested it (confidently posting something incorrect to get responses with the real answer). Has anyone here actually tried this? How did it work?
In Chromium-based browsers the quickest method I've found is to right click -> Inspect the image, then click the Sources tab in the dev tools window. From there you can drag or save the image shown without issue. My guess as to why this works: the Sources view seems to pull from the already-loaded content of the page rather than fetching it again; running a packet capture while trying this showed no new packets.
In Firefox, besides that, you can press Ctrl + I, open the "Media" tab, and pick any of the graphics that were already downloaded to display the page. Then you can save the picture(s) you're interested in. I suppose the source of it is the local cache.
Does not work in this particular case, of course, because the whole image is not yet in the cache.
Great! Just what we need these days: more tricks to screw around with the simple, straightforward implementation of the HTTP protocol! And just in time for Christmas.
I thought this is what it was going to be! Another method would be to generate a plane with the same number of vertices as pixels, store the pixel color values as an attribute, and then render the mesh to a canvas.
This sure seems like a weakness of the so-called "modern" web browser. Simpler, safer clients and proxies have no trouble dealing with a server that is (deliberately) too slow.
On Google Pixel there is a new feature where I can go to the recent-apps screen and it detects images, letting me tap them to run Google Lens, save them, or share them. I was able to save the 506 KB, 841x1252 (1.1 MP) image.
Works fine with wget: it just keeps hanging, but if you Ctrl+C it and open the file, it'll look fine.
The trick is to have nginx never time out and just hang indefinitely after the image is sent. The browser renders whatever image data it has received as soon as possible, even though the request never finishes. However, when saving the image, the browser never finalizes writing to the temp file: it thinks there is more data coming and never renames the temp file to the final file name.
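A minimal stand-in for that server behavior, sketched with Python's stdlib rather than nginx (hypothetical: one way to get the "never finishes" effect is to omit Content-Length and never close the socket):

```python
import socket, time

def serve_never_ending(image_bytes: bytes, port: int = 8080) -> None:
    """Send complete headers plus the whole image, then hang forever.

    No Content-Length is sent, so the client cannot tell where the
    body ends; it renders what it has but keeps waiting for more.
    """
    srv = socket.create_server(("127.0.0.1", port))
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"\r\n" + image_bytes
    )
    while True:  # never close the connection
        time.sleep(60)
```

A browser pointed at this would display the image but never finish "downloading" it, which is exactly the symptom the comments describe.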
My usual way of downloading images is to click and drag the image into my downloads folder on my Mac. Worked fine for me from Safari. Am I missing something?
Aside from all the folks who can download the image one way or another, I'm pretty disappointed that the technique here is simply using a web-server that doesn't work like clients expect. People have broken links or incorrect redirects all the time, but we don't generally make a fuss over them.
Yeah, I couldn't figure out what the fuss was about at first, as I simply right-clicked, copied and pasted into mspaint. I rarely need to save an image; more often than not I just paste it into some other application.
An interesting workaround for Android 12 users: go to the app switcher and there will be a badge over the image which you can click to get "copy", "share" and "save" buttons. Save it from that panel and it works just fine.
No one seems to mention that Chrome keeps spinning on the HTML load as well and eventually kills the image. This means the webpage itself is broken, not just the download. So this just does not work for anything.
This is basically a carefully targeted reverse Slowloris, and it involves right-clicking an image. Why do I fear that this use case, and that level of madcap solution, will all lead back to NFT bros...
This one is pretty easy, but a friend recently showed me one (a gallery of some sort) I couldn't figure out quickly; it was downloading chunks in nonstandard ways and piecing them together with uglified JS.
Somehow right clicking + saving worked fine on Safari (desktop). I tried it a couple of times and it worked in all cases; sometimes it took a second, sometimes more. Perhaps the server dropped the connection?
On WebKit-based browsers, at least, you can just drag the image out; it doesn't bother trying to redownload it, it just reconstructs the image file from memory. This also applies to copy/paste on iOS.
There's a multitude of ways to work around this hack. You can easily grab the screen area via the OS if need be. It seems pointless to try to restrict access to something that's viewable in a browser.
What? Sure, initiating "Save As..." triggers this endless-download thing, but the initial load is the image itself. Opening dev tools, finding it in sources/cache, and saving it from there works: Chrome knows it's 56.1 KB or whatever and just saves it out of the cache. Done.
Interesting but what was the point they're trying to make?
Did you even try this before posting? These steps are no different than just right-clicking the image and choosing "Save image as". It still results in a download that never finishes.
I posted the same snarky comment too. Seems the headline should be “You can’t download this exact image, but you can copy the presentation image via other means.”
More of a play on words for how copy and download often times mean the same thing even though technically they’re different.
kuroguro|4 years ago
https://en.wikipedia.org/wiki/Slowloris_(computer_security)
tomxor|4 years ago
Now I really can't download the image
Tuna-Fish|4 years ago
The format allows for showing images when they are partially downloaded, and also allows pushing data that doesn't actually change the image.
folmar|4 years ago
I can't vouch for chromium-*, but my Firefox does NOT do that. I've just tested it.
wsinks|4 years ago
I now have a photo of the Mona Lisa in my camera roll.
I guess this is one of those things that wouldn’t be as edgy with the actual mechanism stated. :)
numbsafari|4 years ago
1) used the “copy image” function Safari on iOS.
2) took a screenshot.
… back to the drawing board NFT bros.
1vuio0pswjnm7|4 years ago
For example: curl, tnftp, links, haproxy.
Supposedly|4 years ago
Works for me :) (I pasted in Telegram FYI)
Wowfunhappy|4 years ago
1. Secondary click image → "Copy Image"
2. Open Preview
3. File → New from Clipboard
4. Save image
jasmes|4 years ago
Looks at prntscrn key.
meow_mix|4 years ago
1. Open Inspect (right click and hit "inspect")
2. Click the "Network" tab
3. Refresh the page (while clearing the cache Command+Shift+R)
4. Right click on "lisa.jpg" in the list view under the "Network" tab
5. Click "Open in new tab"
6. Right click the image on the new tab
7. Click "Save image as"
Man I can't believe these clowns (or myself for typing all this out--don't know who is worse)