This seems like a good time for a PSA:
If in the future you see something on a public-facing webpage you want to make a durable record of for use as evidence, don't take a screenshot. Those are -- understandably -- widely considered too easy to fabricate.
Instead, snapshot the page with the Internet Archive. It'll log a timestamped copy of the page to their servers. Highly tamper-resistant.
https://archive.org/web/ ("save page now", bottom-right)
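If you want to script this, here's a minimal sketch, assuming the unauthenticated "Save Page Now" GET endpoint at web.archive.org/save/ still behaves this way (the authenticated SPN2 API offers more control):

```python
import urllib.request

def save_to_wayback(url: str) -> str:
    """Ask the Wayback Machine to snapshot `url` via Save Page Now.

    Assumes the plain GET endpoint at web.archive.org/save/ is
    available without an account; this may change over time.
    """
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "archive-snapshot-script/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        # On success, the response redirects to the freshly captured,
        # timestamped snapshot; resp.url is that snapshot's address.
        return resp.url

print(save_to_wayback("https://example.com/"))
```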
Yeah, though it's an understandable limitation that this only works for public-facing pages. I don't have a good solution offhand for non-public content.
Good news—Mastodon actually does a little bit of fancy footwork to make it so that you *can* copy the URL even though it looks like you can't—right clicking "copy link" should still work. Here's the page explaining the HTML/CSS wizardry that makes that possible: https://github.com/tootsuite/documentation/blob/master/Using-the-API/Tips-for-app-developers.md#links
@sir Not going forward.
"...A few months ago we stopped referring to robots.txt files on U.S. government and military web sites for both crawling and displaying web pages (though we respond to removal requests sent to email@example.com). As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly...."
@starkatt I'd also like to note that perma.cc is an excellent tool used by law libraries in this regard. (And I perma.cc all links in my papers because bitrot is a thing.)
@starkatt Selling points: 1) A 9-character URL, easily typeable from dead-tree paper. 2) It screenshots the page, so it can archive *Google Docs* and other esoteric stuff that the Internet Archive breaks on.
Downsides: it's limited to 10 links a month unless your library subscribes.
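For scripting, here's a rough sketch against what Perma.cc's developer docs describe as the v1 archives endpoint; the API key is a placeholder, and the exact field names are my assumption, so check the current docs before relying on this:

```python
import json
import urllib.request

API_KEY = "YOUR_PERMA_API_KEY"  # placeholder: issued per-account by Perma.cc

def save_to_perma(url: str) -> str:
    """Create a Perma.cc record for `url` and return the short link.

    Assumes the v1 archives endpoint from Perma.cc's developer docs;
    field names may differ if the API has changed.
    """
    req = urllib.request.Request(
        "https://api.perma.cc/v1/archives/?api_key=" + API_KEY,
        data=json.dumps({"url": url}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        record = json.load(resp)
    # Each archive gets a short GUID, which forms the compact URL
    # mentioned above: https://perma.cc/<GUID>
    return "https://perma.cc/" + record["guid"]

print(save_to_perma("https://example.com/"))
```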
@starkatt Counter-PSA: pages can also ask NOT to be part of the archive :|
So do both.