Tips and tricks for improving your cooming experience. This page houses user scripts, tips and tricks for using the software found on the other pages, and other suggestions for improving efficiency or showcasing little-known features.
A reminder to always exercise caution when running scripts that you do not understand. We ask knowledgeable users to peer review these scripts for malicious intent, and to edit or remove anything malicious they find. When submitting, please add a short description of what your script does and how to run it.
Solution source 1.1, Solution source 1.2
Someone asked how to download this example video, in order to learn how to download from this website.
Part of the problem was that the website was using a tool called "devtools-detector" to prevent users from observing page contents using inspect element. Someone in a later thread eventually proposed a solution:
You can block
http://player.perverzija.com/player/assets/devtools-detector/devtools-detector.js
in ublock so you can debug it. This site uses its own CDN to host older videos like that one, but streamtape for newer ones, which is trivial to download from.
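If you prefer a permanent rule, a static filter along these lines in uBlock Origin's "My filters" tab should block the detector script (untested; the path is taken from the URL above):
||player.perverzija.com/player/assets/devtools-detector/devtools-detector.js$script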
import sys
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup


def main(base_url: str):
    # Fetch the video page and pull out the embedded player iframe.
    req = requests.get(base_url)
    soup = BeautifulSoup(req.text, 'html.parser')
    frame = soup.find('iframe')
    player_url = frame.attrs['src']
    title = soup.find('h1').text
    # The player URL carries the video key as its query-string value.
    key = urlparse(player_url).query.split('=')[1]
    headers = {'Referer': player_url}
    # master.txt lists the available HLS streams; the last line is the
    # URL of the actual playlist.
    req = requests.get(f'https://player.perverzija.com/cdn/hls/{key}/master.txt?s=1&d=', headers=headers)
    playlist_url = req.text.splitlines()[-1]
    req = requests.get(playlist_url, headers=headers)
    with open(f'{title}.m3u8', 'w') as f:
        f.write(req.text)
    print(title)


if __name__ == '__main__':
    main(sys.argv[1])
That'll download the HLS playlist and print the title so you can pipe it, e.g. in PowerShell:
python perverzija.py https://tube.perverzija.com/vixen-stacy-cruz-and-emily-willis-a-much-needed-break/ | % { ffmpeg -http_persistent 0 -protocol_whitelist file,http,https,tcp,tls -i "$($_).m3u8" -c copy "$($_).mp4" }
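A rough POSIX sh equivalent of that pipeline (untested sketch of the same two steps) would be:
title=$(python perverzija.py 'https://tube.perverzija.com/vixen-stacy-cruz-and-emily-willis-a-much-needed-break/')
ffmpeg -http_persistent 0 -protocol_whitelist file,http,https,tcp,tls -i "$title.m3u8" -c copy "$title.mp4"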
The JAV family of shell functions and scripts streams or downloads JAV videos from javhdporn.net via the command line. The original JAV script was a shell function posted by an anon in /jp/jav (found directly below).
Just enter the desired JAV ID number, which can be found on sites such as JAV Library and R18/DMM.
jav () {
    # Take the JAV ID from the first argument, or read it from stdin.
    if [ -z "$1" ] ; then
        read name
    else
        name="$1"
    fi
    # Scrape the embedURL entries from the javhdporn page's JSON-LD,
    # rewrite /v/ to /api/source/, then pick the last stream at or
    # below 720p from the API response.
    links="$(curl -s "$(curl -s "https://www2.javhdporn.net/video/$name" | grep "embedURL" | grep -o "{.*}" | jq '.["@graph"]' | jq -r '.[].embedURL' | sed '/^null$/d' | sed 's/\/v\//\/api\/source\//')" --data-raw 'r=&d=javmvp.com' | jq -r '.data[] | select(.label | contains("720p", "480p","360p")).file' | tail -n1)"
    mpv "$links"
}
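Usage looks like this (the ID here is a hypothetical example):
jav abp-123
or, since the function falls back to reading standard input:
echo abp-123 | jav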
Many derivatives have been made by anons in /g/cumg since then.
For downloading, refer to the Piracy and Miscellaneous/Downloaders wiki pages for tools and sites.
A PowerShell script for a site-rip download of eraudica.com was posted in /g/cumg. (A rewrite of this in POSIX sh would be appreciated; a rough sketch follows the script.) To run it on *nix, install PowerShell (pwsh), make the script executable, and run it. The script saves files as /Title/Eraudica - Title.mp3.
Function Scrape-Page {
    Param ($url)
    # Collect the links to individual audio pages (/e/eve/NNNN...).
    $links = ($url.links | Where-Object {$_.href -match "/e/eve/(\d{4})"})
    $links | ForEach-Object -Parallel {
        $u = "https://eraudica.com$($_.href)"
        $page = (iwr -UseBasicParsing "$u")
        $matches = ($page.content | sls 'var title = "(.*)";').matches
        if ($matches -ne $null) {
            $title = [regex]::Unescape(($page.content | sls 'var title = "(.*)";').matches.groups[1].value)
            Write-Host "Fetching $title"
            # The page embeds a JSON blob holding the media URL.
            $asset = ((($page.content | sls "audioInfo = (.*);").matches.groups[1].value) | ConvertFrom-Json).mediaurl
            # Sanitize the title for use as a directory name.
            $dir = $title.Split([IO.Path]::GetInvalidFileNameChars()) -join '_'
            if (!(Test-Path -Path $dir)) {
                $obj = (New-Item -Path "$dir" -ItemType directory)
                $realname = $obj.Name
                $outpath = [IO.Path]::Combine("$($realname)", "Eraudica - $($realname).mp3")
                iwr -UseBasicParsing $asset -OutFile $outpath -SkipHttpErrorCheck
            }
        }
    } -ThrottleLimit 10
}

$base = "https://eraudica.com/"
$root = iwr -UseBasicParsing $base
# The front page shows "Page 1 of N"; rip it, then walk the remaining pages.
$pages = [int](($root.content | sls 'Page 1 of (\d+)').matches.groups[1].value)
Scrape-Page($root)
for ($i = 1; $i -lt $pages; $i++) {
    $url = "$($base)e/eve?page=$i"
    Write-Host "Scraping $url"
    $root = iwr -UseBasicParsing $url
    Scrape-Page($root)
}
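As a starting point for the requested rewrite, here is a rough POSIX sh sketch of the same logic (untested; it assumes the page markup still matches the patterns above, that curl and jq are installed, and that the JSON media-URL key is mediaUrl or mediaurl, since PowerShell property access is case-insensitive and jq is not):
#!/bin/sh
# Rough, untested POSIX sh sketch of the PowerShell rip above.
base="https://eraudica.com"

# Read a listing page on stdin and fetch each /e/eve/NNNN... link on it.
scrape_page() {
    grep -o 'href="/e/eve/[0-9]\{4\}[^"]*"' | sed 's/^href="//;s/"$//' |
    while read -r path; do
        page=$(curl -s "$base$path")
        title=$(printf '%s\n' "$page" | sed -n 's/.*var title = "\(.*\)";.*/\1/p' | head -n1)
        [ -z "$title" ] && continue
        # The key case here is an assumption (mediaUrl vs mediaurl).
        media=$(printf '%s\n' "$page" | sed -n 's/.*audioInfo = \(.*\);.*/\1/p' | head -n1 | jq -r '.mediaUrl // .mediaurl')
        # Replace characters that are invalid in file names.
        dir=$(printf '%s' "$title" | tr '/\\:*?"<>|' '_')
        if [ ! -d "$dir" ]; then
            mkdir -p "$dir"
            echo "Fetching $title"
            curl -s "$media" -o "$dir/Eraudica - $dir.mp3"
        fi
    done
}

curl -s "$base/" | scrape_page
pages=$(curl -s "$base/" | sed -n 's/.*Page 1 of \([0-9]*\).*/\1/p' | head -n1)
i=1
while [ "$i" -lt "${pages:-1}" ]; do
    echo "Scraping $base/e/eve?page=$i"
    curl -s "$base/e/eve?page=$i" | scrape_page
    i=$((i + 1))
done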
You can use the browser addon "Imagus" to expand images on hover, which lets you view them normally even when right-clicking is disabled on the webpage itself.
You can also use the browser addon "Image Max URL" to expand images to their definitive original-sized version. It has an option to automatically redirect to the original-sized image if you so choose. It even works for videos.
GitHub page, where it's available as a userscript for most browsers
Solution 1.1 (not advisable to use due to lag), solution 1.2
Someone asked how to download this example image (now expired), in order to learn how to download from this website.
The website had disabled "ctrl + S" ("Save As") and right-clicking, and, while not immediately relevant, had also set the image to expire after a set number of hours.
There were two solutions posted:
https://archived.moe/g/thread/66275371/#66289687
Press F12, go to console, paste this
window.location.href = document.querySelector('canvas').toDataURL('image/png');
and press Enter, then right-click and save it.
https://archived.moe/g/thread/66275371/#66289955
firefox doesn't work well with data URIs in its URL bar.
you can try
(function() {
    const canvasImage = document.querySelector('canvas').toDataURL('image/png');
    document.body.innerHTML = `<img src="${canvasImage}">`;
})();
instead, which will just display the image on the current page but lets you right-click and save it.
That should make it not freeze.
>Would this work for sites with dozens of images on the same page?
It depends on the page and how they display the images.
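If the images are all plain canvases, a variation along these lines should dump them all at once (untested; toDataURL() will throw on any canvas tainted by cross-origin content):
(function() {
    // Build one <img> per canvas so each can be right-clicked and saved.
    const images = Array.from(document.querySelectorAll('canvas'))
        .map(c => `<img src="${c.toDataURL('image/png')}">`);
    document.body.innerHTML = images.join('<br>');
})();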
Note: This is not a section to post IPFS shares, but simply to spread awareness that Hydrus supports it.
Hydrus Network supports IPFS. IPFS is a p2p protocol that makes it easy to share many sorts of data. The Hydrus client can communicate with an IPFS daemon to send and receive files.
Currently the coom.tech host does not allow file uploads, so I provide a link instead. Please edit this page to embed the file directly if the host changes this in the future.
To import downloaders into Hydrus Network, open Hydrus Network, click on the network tab, click downloaders, then click import downloaders. A photo of Lain should pop up; save the downloader file, then drag it onto the photo of Lain.
Site: https://ofans.party/#/
Note: ofans.party has died (although I believe the content has been saved); however, a successor site is being planned by the host of kemono.party. ofans.party downloader: https://8chan.moe/.media/064149085f868b9778a6302b311d54c34a1c9ea781e5d91d85d3b5157f29fd59.png
Note: This downloader only works in gallery or subscription mode; if you're using the main site, Hydrus won't be able to recognize the URL.
Also note: This downloads from IPFS, meaning the content is distributed P2P, but I'm setting a gateway in the parser for compatibility (so you don't have to host your own node), specifically https://ipfs.io/ipfs/. Sometimes the gateway won't have the content right away and may return a 504 because it could not fetch the content fast enough; try again and it'll work eventually.
You can also choose another gateway, but you'll have to enter it manually in the parser, so I wouldn't recommend it. If you still decide to look into it, a list is available at https://ipfs.github.io/public-gateway-checker/.
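For reference, any public gateway serves IPFS content at a URL of this form, where <CID> is the file's content identifier:
https://ipfs.io/ipfs/<CID>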
https://github.com/mikf/gallery-dl
Configuration files for gallery-dl use a JSON-based file format.
For a (more or less) complete example with options set to their default values, see gallery-dl.conf.
For a configuration file example with more involved settings and options, see gallery-dl-example.conf.
A list of all available configuration options and their descriptions can be found in configuration.rst.
gallery-dl searches for configuration files in the following places:
Windows:
%APPDATA%\gallery-dl\config.json
%USERPROFILE%\gallery-dl\config.json
%USERPROFILE%\gallery-dl.conf
(%USERPROFILE% usually refers to the user's home directory, i.e. C:\Users\<username>\)
Linux, macOS, etc.:
/etc/gallery-dl.conf
${HOME}/.config/gallery-dl/config.json
${HOME}/.gallery-dl.conf
Values in later configuration files will override previous ones.
Command line options will override all related settings in the configuration file(s), e.g. using --write-metadata will enable writing metadata using the default values for all postprocessors.metadata.* settings, overriding any specific settings in configuration files.
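Each of the per-site snippets below belongs inside the top-level "extractor" object of your config.json; for example, the artstation block below slots in like this:
{
    "extractor": {
        "artstation": {
            "external": false,
            "directory": ["artstation", "{userinfo[username]}"],
            "filename": "{userinfo[username]}_{hash_id}_{asset[id]}_{title}_{date}.{extension}"
        }
    }
}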
"artstation":
{
"external": false,
"directory": ["artstation", "{userinfo[username]}"],
"filename": "{userinfo[username]}_{hash_id}_{asset[id]}_{title}_{date}.{extension}"
},
example url:
https://www.artstation.com/artwork/QrrARd
default:
artstation_7744526_28467182_Kama
template:
ucupumar_QrrARd_28467182_Kama_2020-07-10 17_31_54
"deviantart":
{
"include": "gallery,scraps",
"refresh-token": "cache",
"client-id": "placeholder",
"client-secret": "placeholder",
"flat": true,
"folders": false,
"journals": "html",
"mature": true,
"metadata": true,
"original": true,
"quality": 100,
"extra": true,
"wait-min": 0,
"cookies": "C:\\Users\\User\\cookiesda.txt",
"cookies-update": true,
"directory": ["deviantart", "{author[username]}"],
"filename": "{author[username]}_{index}_{title}_{date}.{extension}"
},
example url:
https://www.deviantart.com/personalami/art/Valicia-868721085
default:
deviantart_868721085_Valicia
template:
PersonalAmi_868721085_Valicia_2021-01-30 05_20_24
"mastodon":
{
"mastodon.xyz":
{
"access-token": "cab65529..."
},
"tabletop.social": {
"access-token": "513a36c6..."
},
"directory": ["mastodon", "{instance}", "{account[username]!l}"],
"filename": "{category}_{account[username]}_{id}_{media[id]}_{date}.{extension}"
},
example url:
https://baraag.net/@orenjipiiru/104419352335505520
default:
baraag_104419352335505520_10254929
template:
baraag_orenjipiiru_104419352335505520_10254929_2020-06-28 02_54_31
"newgrounds":
{
"postprocessors": [{
"name": "metadata",
"directory": "metadata"
}],
"directory": ["newgrounds", "{user}"],
"filename": "{user}_{index}_{title}_{date}{num:?_//}.{extension}"
},
example url:
https://www.newgrounds.com/art/view/sailoryon/yon-dream-buster
default:
newgrounds_1438673_Yon Dream Buster!
newgrounds_1438673_01_Yon Dream Buster!
template:
sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52
sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52_1
"nijie":
{
"cookies": "C:\\Users\\User\\cookiesnj.txt",
"cookies-update": true,
"username": null,
"password": null,
"directory": ["nijie", "{artist_id}"],
"filename": "{artist_id}_{image_id}_{date}_{num}.{extension}"
},
example url:
https://nijie.info/view.php?id=162282
default:
162282_p0
template:
735_162282_Wed 16 Mar 2016 10_09_46 AM JST+0900_0
"patreon":
{
"directory": ["patreon", "{creator[vanity]}"],
"filename": "{creator[vanity]}_{id}_{title}_{filename}_{date}_{num}.{extension}"
},
example url:
https://www.patreon.com/posts/38128497
default:
38128497_June Print WIPs!_01
template:
fetalstar_38128497_June Print WIPs!_patreon-promoprint-wip3_2020-06-11 19_02_00_1
"piczel":
{
"directory": ["piczel", "{user[username]}"],
"filename": "{user[username]}_{id}_{title}_{date}_{num}.{extension}"
},
example url:
https://piczel.tv/gallery/image/25048
default:
piczel_25048_Hats_00
template:
GCFMug_25048_Hats_2020-02-18 05_48_01_0
"pillowfort":
{
"directory": ["pillowfort", "{username}"],
"filename": "{username}_{post_id}_{id}{title:?_//}{filename:?_//}_{date}.{extension}"
},
example url:
https://www.pillowfort.social/posts/1501710
default:
1501710 (sketches) Holo Cosplays Revy 01
1501710 (sketches) Holo Cosplays Revy 02
template:
Seraziel_Art_1501710_1040212_(sketches) Holo Cosplays Revy_Bonus Sketch 1_2020-07-01 03_26_08
Seraziel_Art_1501710_1040213_(sketches) Holo Cosplays Revy_bonus sketch 2_2020-07-01 03_26_08
"seiga":
{
"cookies": "C:\\Users\\User\\cookiessg.txt",
"cookies-update": true,
"username": null,
"password": null,
"directory": ["seiga", "{user[id]}"],
"filename": "{user[id]}_{image_id}{date:?_//}.{extension}"
},
example url:
https://seiga.nicovideo.jp/seiga/im10635055
default:
seiga_10635055
template:
51170288_10635055_2020-11-04 03_37_00
"twitter":
{
"replies": true,
"retweets": false,
"twitpic": false,
"videos": true,
"cookies": "C:\\Users\\User\\cookiestw.txt",
"cookies-update": true,
"directory": ["twitter", "{user[name]}"],
"filename": "{user[name]}_{tweet_id}_{date}_{num}.{extension}"
},
example url:
https://twitter.com/Himazin88/status/1353633551837589505
default:
1353633551837589505_1
template:
Himazin88_1353633551837589505_2021-01-25 09_19_22_1
"weasyl":
{
"directory": ["weasyl", "{owner_login}"],
"filename": "{owner_login}_{submitid}_{title}_{date}.{extension}",
"api-key": "placeholder"
},
example url:
https://www.weasyl.com/~fluffkevlar/submissions/1622631/ink-eyes
default:
1622631 Ink-Eyes
template:
fluffkevlar_1622631_Ink-Eyes_2018-04-13 01_37_40
Below are configurations for sites where I personally found that just the original "{filename}" was useful enough:
"furaffinity":
{
"postprocessors": [{
"name": "metadata",
"directory": "metadata"
}],
"descriptions": "html",
"include": "gallery,scraps",
"directory": ["furaffinity", "{user}"],
"filename": "{filename}.{extension}",
"cookies": "C:\\Users\\User\\cookiesfa.txt",
"cookies-update": true
},
example url:
https://www.furaffinity.net/view/12761971/
default:
12761971 Hearth Stone
template:
1392572291.amadnomoto_jaina
"hentaifoundry":
{
"directory": ["hentaifoundry", "{user}"],
"filename": "{filename}.{extension}"
},
example url:
https://www.hentai-foundry.com/pictures/user/noise/807617/Felicia20200517
default:
hentaifoundry_807617_Felicia20200517
template:
noise-807617-Felicia20200517
Using tmsu, files downloaded with gallery-dl can be tagged the way Hydrus tags its downloads. Place the following bash script in your PATH; I recommend ~/.local/bin/gdl-tag.
#!/bin/bash

# Read tags of one type from the gallery-dl sidecar JSON into an array.
function get_tags () {
    json=$1
    # Build the jq query: take .tags_<type>, fall back to "", split on
    # spaces, and drop empty fields.
    query=".tags_$2"'//""|split(" ")[]|select(length > 0)'
    local -n tags=$3
    for tag in $(jq -r "$query" "$json") ; do
        tags+=($tag)
    done
}

# Prefix every tag in an array with a namespace, e.g. creator=name.
function add_tag_type () {
    type=$1
    local -n tags=$2
    i=0
    for value in ${tags[@]} ; do
        tags[$i]="$type=$value"
        i=$((i + 1))
    done
}

image="$1"
json="$image.json"
shift

rating=$(jq -r .rating "$json")

tags_artist=()
get_tags "$json" artist tags_artist
add_tag_type creator tags_artist

tags_character=()
get_tags "$json" character tags_character
add_tag_type character tags_character

tags_copyright=()
get_tags "$json" copyright tags_copyright
add_tag_type series tags_copyright

tags_metadata=()
get_tags "$json" metadata tags_metadata
add_tag_type meta tags_metadata

tags_general=()
get_tags "$json" general tags_general

tags=()
if [ "$rating" != "null" ] ; then
    tags+=("rating=$rating")
fi
tags+=(${tags_artist[@]})
tags+=(${tags_character[@]})
tags+=(${tags_copyright[@]})
tags+=(${tags_metadata[@]})
tags+=(${tags_general[@]})

tmsu tag "$image" ${tags[@]}
Your ~/.config/gallery-dl/config.json should also set extractor.tags to true, in addition to whatever else you have. The bare minimum config.json looks like this:
{
    "extractor": {
        "tags": true
    }
}
To test it out, try the following command, which downloads two images and tags them using the gdl-tag script:
gallery-dl 'https://gelbooru.com/index.php?page=post&s=list&tags=tomari_%28veryberry00%29+shishiro_botan+' --write-metadata --exec 'gdl-tag {}' --range '1-2' --no-skip
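Once tagged, the files can be queried with tmsu; for example (the tag value here assumes the Gelbooru posts above carry a character tag for shishiro_botan):
tmsu files character=shishiro_botan
tmsu tags path/to/image.jpg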
Ugoira are downloaded from danbooru as lossily encoded WEBMs by default. To save the originals, add the following to your gallery-dl config:
{
    "extractor": {
        "danbooru": {
            "ugoira": true
        }
    }
}
To save ugoira works in a playable format (losslessly), use --ugoira-conv-copy or add the following postprocessor to your gallery-dl config:
{
    "extractor": {
        "postprocessors": [
            {
                "name": "ugoira",
                "extension": "mkv",
                "ffmpeg-args": ["-c", "copy", "-nostdin", "-y"],
                "ffmpeg-demuxer": "mkvmerge",
                "ffmpeg-output": false,
                "repeat-last-frame": false,
                "whitelist": ["pixiv", "danbooru"]
            }
        ]
    }
}
Note: --ugoira-conv-copy and the above postprocessor will delete the original zips. If keeping them is important to you, add the line below to your postprocessor config.
"keep-files": true
"mpv is a free (as in freedom) media player for the command line. It supports a wide variety of media file formats, audio and video codecs, and subtitle types."
You can "shift + right click" the folder mpv is in to gain access to the right click option "Open command window here". You can then run the command "mpv [link]" to stream any link in mpv instead of your browser.
https://github.com/ytdl-org/youtube-dl/#description
"youtube-dl is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like"
youtube-dl -i --cookies youtube-dl_cookies.txt -o "C:\Users\User\youtube-dl output\%(title)s-%(id)s\%(title)s-%(id)s.%(ext)s" --write-description --write-info-json --write-annotations --write-sub --write-thumbnail https://www.youtube.com/watch?v=7E-cwdnsiow
To clear youtube-dl's cache directory (often suggested as a fix for persistent HTTP 403 errors), run the following command:
youtube-dl --rm-cache-dir