Tips and tricks for improving your cooming experience. This page houses user-scripts, tips and tricks for using software found on the other pages, and other suggestions for improving efficiency or highlighting little-known features.

Scripts

Reminder: always exercise caution when running scripts that you do not understand. We ask experienced users to peer review these scripts for malicious intent, and to edit or remove anything suspect. When submitting, please add a short description of what your script does and how to run it.

Shell Scripts

Download Videos From Sites That Disable Downloading

Original help request

Solution source 1.1, Solution source 1.2

Someone asked how to download this example video so they could learn how to download from this website.

Part of the problem was that the website was using a tool called "devtools-detector" to prevent users from observing page contents using inspect element. Someone in a later thread eventually proposed a solution:

You can block

http://player.perverzija.com/player/assets/devtools-detector/devtools-detector.js

in uBlock so you can debug it. This site uses its own CDN to host older videos like that one, but streamtape for newer ones, which is trivial to download from.

import sys
import requests
from urllib.parse import urlparse
from bs4 import BeautifulSoup


def main(base_url: str):
    # Fetch the video page and pull out the embedded player iframe and the title
    req = requests.get(base_url)
    soup = BeautifulSoup(req.text, 'html.parser')
    frame = soup.find('iframe')
    player_url = frame.attrs['src']
    title = soup.find('h1').text
    # The player URL carries the video key as a query parameter
    key = urlparse(player_url).query.split('=')[1]
    headers = {'Referer': player_url}
    # Ask the CDN for the master playlist and follow the last variant it lists
    req = requests.get(f'https://player.perverzija.com/cdn/hls/{key}/master.txt?s=1&d=', headers=headers)
    playlist_url = req.text.splitlines()[-1]
    req = requests.get(playlist_url, headers=headers)
    # Save the HLS playlist and print the title so the output can be piped to ffmpeg
    with open(f'{title}.m3u8', 'w') as f:
        f.write(req.text)
    print(title)


if __name__ == '__main__':
    main(sys.argv[1])

That'll download the HLS playlist and print the title so you can pipe it like

python perverzija.py https://tube.perverzija.com/vixen-stacy-cruz-and-emily-willis-a-much-needed-break/ | % { ffmpeg -http_persistent 0 -protocol_whitelist file,http,https,tcp,tls -i "$($_).m3u8" -c copy "$($_).mp4" }
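
If you're not on Windows, a rough bash equivalent (assuming the script above is saved as perverzija.py) would be:

title="$(python perverzija.py https://tube.perverzija.com/vixen-stacy-cruz-and-emily-willis-a-much-needed-break/)"
ffmpeg -http_persistent 0 -protocol_whitelist file,http,https,tcp,tls -i "$title.m3u8" -c copy "$title.mp4"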

JAV

Stream

The JAV family of shell functions and scripts streams or downloads JAV videos from javhdporn.net from the command line. The original JAV script was a shell function posted by an anon in /jp/jav (found directly below).

Just enter the desired JAV ID number, which can be found on sites such as JAV Library and R18/DMM.

# Stream a JAV in mpv given its ID (reads the ID from stdin if no argument is given)
jav () {
  if [ -z "$1" ] ; then
    read name
  else
    name="$1"
  fi
  # Scrape the embedURL from the javhdporn page, query the player API for the stream sources,
  # and pick the last source labelled 720p/480p/360p
  links="$(curl -s "$(curl -s "https://www2.javhdporn.net/video/$name" | grep "embedURL" | grep -o "{.*}" | jq '.["@graph"]' | jq -r '.[].embedURL' | sed '/^null$/d' | sed 's/\/v\//\/api\/source\//')" --data-raw 'r=&d=javmvp.com' | jq -r '.data[] | select(.label | contains("720p", "480p","360p")).file' | tail -n1)"
  mpv "$links"
}
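
Example usage, assuming the function has been defined in your shell (the ID below is only an illustration):

jav ABP-123
# or pipe the ID in on stdin
echo ABP-123 | jav
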
Derivatives

Many derivatives have been made by anons in /g/cumg since then.

Download

For downloading, refer to Piracy and Miscellaneous/Downloaders wiki pages for tools and sites.

Erotic Audio

Download

Eraudica

A PowerShell script for a site-rip download of eraudica.com was posted in /g/cumg. (A rewrite of this in POSIX sh would be appreciated.) To run it on *nix, install PowerShell (pwsh), make the script executable (chmod +x), and run it (see the example after the script). The script saves each recording as: /Title/Eraudica - Title.mp3

# Scrapes one index page of eraudica.com and downloads every audio it links to
Function Scrape-Page{
Param ($url)
  # Collect links to individual audio pages (hrefs matching /e/eve/ followed by four digits)
  $links = ($url.links | Where-Object {$_.href -match "/e/eve/(\d{4})"})
  $links | foreach-object -parallel {
    $u = "https://eraudica.com$($_.href)"
    $page = (iwr -usebasicparsing "$u")
    $matches = ($page.content | sls 'var title = "(.*)";').matches
    if ($matches -ne $null) {
      $title = [regex]::unescape(($page.content | sls 'var title = "(.*)";').matches.groups[1].value)
      write-host "Fetching $title"
      # The direct media URL is embedded in the page's audioInfo JSON
      $asset = ((($page.content | sls "audioInfo = (.*);").matches.groups[1].value) | convertfrom-json).mediaurl
      $dir = $title.Split([IO.Path]::GetInvalidFileNameChars()) -join '_'

      # Skip titles we've already made a folder for; otherwise create the folder and download into it
      if (!(test-path -path $dir)) {
        $obj = (new-item -path "$dir" -itemtype directory)
        $realname = $obj.Name
        $outpath = [IO.Path]::Combine("$($realname)", "Eraudica - $($realname).mp3")
        iwr -usebasicparsing $asset -OutFile $outpath -skiphttperrorcheck
      }
    }
  } -throttlelimit 10
}

$base = "https://eraudica.com/"
$root = iwr -usebasicparsing $base
$pages = [int](($root.content | sls 'Page 1 of (\d+)').matches.groups[1].value)

Scrape-Page($root)

for ($i = 1; $i -lt $pages; $i++) {
  $url = "$($base)e/eve?page=$i"
  write-host "Scraping $url"
  $root = iwr -usebasicparsing $url
  Scrape-Page($root)
}
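
For example, on *nix, assuming you saved the script as eraudica.ps1 (the filename is just an illustration), run it with pwsh from the directory you want the rips saved into:

pwsh ./eraudica.ps1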

Software

Browsers

Non-browser specific

Download Images From Sites That Disable Downloading

Instagram

You can use the browser addon "Imagus" to expand images on hover, which lets you view them normally even though right-clicking is disabled on the webpage itself.

Firefox addon link

Chrome addon link

You can also use the browser addon "Image Max URL" to expand images to their original-sized versions. It has an option to automatically redirect to the original-sized image if you so choose. It even works for videos.

Firefox addon link

GitHub page, where it's available as a userscript for most browsers

Unsee

Original help request

Solution 1.1 (not advisable to use due to lag), solution 1.2

Someone asked how to download this example image (now expired) so they could learn how to download from this website.

The website had disabled "ctrl + S" ("Save As") and right-clicking, and, while not immediately relevant, had also set the image to expire after a set number of hours.

There were two solutions posted:

https://archived.moe/g/thread/66275371/#66289687

Press F12, go to console, paste this

window.location.href = document.querySelector('canvas').toDataURL('image/png');

and press enter, then right-click and save it.

https://archived.moe/g/thread/66275371/#66289955

Firefox doesn't work well with data URIs in its URL bar.

you can try

(function() {
 const canvasImage = document.querySelector('canvas').toDataURL('image/png');
 document.body.innerHTML = `<img src="${canvasImage}">`;
})();

instead, which will just display the image on the current page but lets you right-click and save it.

That should make it not freeze.

>Would this work for sites with dozens of images on the same page?

It depends on the page and how they display the images.

Hydrus Network

IPFS

Note: This is not a section to post IPFS shares, but simply spread awareness that Hydrus supports it.

Hydrus Network supports IPFS. IPFS is a p2p protocol that makes it easy to share many sorts of data. The Hydrus client can communicate with an IPFS daemon to send and receive files.

Downloaders

Currently the coom.tech host does not allow file uploads, so a link is provided instead. Please edit this page to embed the file directly if the host changes this in the future.

To import downloaders into Hydrus Network, open Hydrus Network, click the Network tab, click downloaders, then click import downloaders. A photo of Lain should pop up; save the downloader file, then drag it onto the photo of Lain.

ofans.party

Site: https://ofans.party/#/

Note: ofans.party has died (although I believe the content has been saved); a successor site is being planned by the host of kemono.party.

ofans.party downloader: https://8chan.moe/.media/064149085f868b9778a6302b311d54c34a1c9ea781e5d91d85d3b5157f29fd59.png

Note: This downloader only works in gallery or subscription mode; if you're using the main site, Hydrus won't be able to recognize the URL.

Also note: This downloads from IPFS. What this means is that the content is distributed P2P, but I'm setting a gateway in the parser for compatibility (so you don't have to host your own node), specifically https://ipfs.io/ipfs/. Sometimes, the gateway won't have the content right away and may return a 504 because it did not get your content fast enough; try it again and it'll work eventually.

You can also choose another gateway, but you'll have to enter it manually in the parser, so I wouldn't recommend it. If you still decide to look into it, a list is available at https://ipfs.github.io/public-gateway-checker/.
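
If you ever need to pull something from a gateway outside of Hydrus, curl can ride out the transient 504s by retrying. A minimal sketch (the CID and output filename are placeholders):

# retry up to 5 times, waiting 10 seconds between attempts, on transient errors such as 504
curl --retry 5 --retry-delay 10 -L -o output.bin "https://ipfs.io/ipfs/<CID>"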

gallery-dl

https://github.com/mikf/gallery-dl

Configuration files for gallery-dl use a JSON-based file format.

For a (more or less) complete example with options set to their default values, see gallery-dl.conf.

For a configuration file example with more involved settings and options, see gallery-dl-example.conf.

A list of all available configuration options and their descriptions can be found in configuration.rst.

gallery-dl searches for configuration files in the following places:

Windows:

  • %APPDATA%\gallery-dl\config.json
  • %USERPROFILE%\gallery-dl\config.json
  • %USERPROFILE%\gallery-dl.conf

(%USERPROFILE% usually refers to the user's home directory, i.e. C:\Users\<username>\)

Linux, macOS, etc.:

  • /etc/gallery-dl.conf
  • ${HOME}/.config/gallery-dl/config.json
  • ${HOME}/.gallery-dl.conf

Values in later configuration files will override previous ones.

Command line options will override all related settings in the configuration file(s), e.g. using --write-metadata will enable writing metadata using the default values for all postprocessors.metadata.* settings, overriding any specific settings in configuration files.

  • In plain English: to use these settings, navigate here and use the "ctrl + S" shortcut in your browser to save the file under its default filename ("gallery-dl.conf"). You can also save it by following the "gallery-dl.conf" hyperlink from the quote above, right-clicking the "Raw" button, then clicking "Save Link as..." in the dropdown menu (the right-click text may vary depending on your browser). Then, assuming your "gallery-dl.exe" file is in "C:\Users\User", put your "gallery-dl.conf" file in there as well.
  • For sites where you use a "cookies.txt" for authentication, you can export cookies from your browser using a browser extension. For Firefox I use cookies.txt, which provides the option to only export cookies specific to the current site. If you export cookies from a private browsing session or container tab, it produces two cookies.txt files. I believe you are fine to consolidate them by pasting the contents of one below the other. I am unfortunately uneducated on why it exhibits this behavior, or what the differences are between the files, if any.
  • To run the program, provided your "gallery-dl.exe" file is in "C:\Users\User", just open cmd ("Win + R" to open "Run", type "cmd", then hit "enter"; alternatively, search for "cmd" in Windows search and select it from the results), then enter "gallery-dl [link]" to download from the specific link included in your command (performing this action is called running a "command"; what you just entered is referred to as a "command"). See the example just below this list.
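
For example, combining the config above with a cookies file, a typical command might look like this (the cookies path and URL are only illustrations, reusing values from the Twitter section below):

gallery-dl --cookies "C:\Users\User\cookiestw.txt" "https://twitter.com/Himazin88/media"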

Filename Templates

{user}_{post id}_{image id}_{title}_{date}

  • Where "image id" isn't included, "num" is added to the end, else the website doesn't allow multiple-image posts.
  • Exceptions from this filename structure include sites where a username isn't included (as distinguished from a "name" where the artist may change it far more often, so you shouldn't treat it as standardized), sites where filenames are discrepant from their title (in which case filenames are included), and sites where titles aren't included.
ArtStation
        "artstation":
        {
            "external": false,

            "directory": ["artstation", "{userinfo[username]}"],
            "filename": "{userinfo[username]}_{hash_id}_{asset[id]}_{title}_{date}.{extension}"
        },

example url:

https://www.artstation.com/artwork/QrrARd

default:

artstation_7744526_28467182_Kama

template:

ucupumar_QrrARd_28467182_Kama_2020-07-10 17_31_54

DeviantArt
        "deviantart":
        {
            "include": "gallery,scraps",
            "refresh-token": "cache",
                        "client-id": "placeholder",
            "client-secret": "placeholder",
            "flat": true,
            "folders": false,
            "journals": "html",
            "mature": true,
            "metadata": true,
            "original": true,
            "quality": 100,
            "extra": true,
            "wait-min": 0,
            "cookies": "C:\\Users\\User\\cookiesda.txt",
            "cookies-update": true,

            "directory": ["deviantart", "{author[username]}"],
            "filename": "{author[username]}_{index}_{title}_{date}.{extension}"
        },

example url:

https://www.deviantart.com/personalami/art/Valicia-868721085

default:

deviantart_868721085_Valicia

template:

PersonalAmi_868721085_Valicia_2021-01-30 05_20_24

  • Replace instances of "placeholder" with the appropriate value
  • Everything except the filename structure for this is sourced from here.
Mastodon
        "mastodon":
        {
            "mastodon.xyz":
            {
                "access-token": "cab65529..."
            },
            "tabletop.social": {
                "access-token": "513a36c6..."
            },

            "directory": ["mastodon", "{instance}", "{account[username]!l}"],
            "filename": "{category}_{account[username]}_{id}_{media[id]}_{date}.{extension}"
        },

example url:

https://baraag.net/@orenjipiiru/104419352335505520

default:

baraag_104419352335505520_10254929

template:

baraag_orenjipiiru_104419352335505520_10254929_2020-06-28 02_54_31

Newgrounds
        "newgrounds":
        {
            "postprocessors": [{
                "name": "metadata",
                "directory": "metadata"
            }],

            "directory": ["newgrounds", "{user}"],
            "filename": "{user}_{index}_{title}_{date}{num:?_//}.{extension}"
        },

example url:

https://www.newgrounds.com/art/view/sailoryon/yon-dream-buster

default:

newgrounds_1438673_Yon Dream Buster!

newgrounds_1438673_01_Yon Dream Buster!

template:

sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52

sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52_1

  • Ripping a newgrounds user page only rips the "art" section of their profile; if you want "movies" you will have to rip the "movies" page directly. But do note that movies tend to have much larger file sizes than images.
  • Preserving the metadata of newgrounds uploads is particularly useful, because newgrounds heavily resizes and jpeg-compresses all images beyond the first if a user posts multiple at once. Some artists upload alt-versions of their images to 3rd-party hosting sites and link in the description to evade this.
Nijie
        "nijie":
        {
            "cookies": "C:\\Users\\User\\cookiesnj.txt",
            "cookies-update": true,

            "username": null,
            "password": null,

            "directory": ["nijie", "{artist_id}"],
            "filename": "{artist_id}_{image_id}_{date}_{num}.{extension}"
        },

example url:

https://nijie.info/view.php?id=162282

default:

162282_p0

template:

735_162282_Wed 16 Mar 2016 10_09_46 AM JST+0900_0

Patreon
        "patreon":
        {
            "directory": ["patreon", "{creator[vanity]}"],
            "filename": "{creator[vanity]}_{id}_{title}_{filename}_{date}_{num}.{extension}"
        },

example url:

https://www.patreon.com/posts/38128497

default:

38128497_June Print WIPs!_01

template:

fetalstar_38128497_June Print WIPs!_patreon-promoprint-wip3_2020-06-11 19_02_00_1

Piczel
        "piczel":
        {
            "directory": ["piczel", "{user[username]}"],
            "filename": "{user[username]}_{id}_{title}_{date}_{num}.{extension}"
        },

example url:

https://piczel.tv/gallery/image/25048

default:

piczel_25048_Hats_00

template:

GCFMug_25048_Hats_2020-02-18 05_48_01_0

Pillowfort
        "pillowfort":
        {
            "directory": ["pillowfort", "{username}"],
            "filename": "{username}_{post_id}_{id}{title:?_//}{filename:?_//}_{date}.{extension}"
        },

example url:

https://www.pillowfort.social/posts/1501710

default:

1501710 (sketches) Holo Cosplays Revy 01

1501710 (sketches) Holo Cosplays Revy 02

template:

Seraziel_Art_1501710_1040212_(sketches) Holo Cosplays Revy_Bonus Sketch 1_2020-07-01 03_26_08

Seraziel_Art_1501710_1040213_(sketches) Holo Cosplays Revy_bonus sketch 2_2020-07-01 03_26_08

  • Mind the path length for this template
  • For legibility when reading filenames in a list, you may want to move the "date" part of the template to before "title" and "filename", since everything preceding those rarely differs in path length
Seiga
        "seiga":
        {
            "cookies": "C:\\Users\\User\\cookiessg.txt",
            "cookies-update": true,

            "username": null,
            "password": null,

            "directory": ["seiga", "{user[id]}"],
            "filename": "{user[id]}_{image_id}{date:?_//}.{extension}"
        },

example url:

https://seiga.nicovideo.jp/seiga/im10635055

default:

seiga_10635055

template:

51170288_10635055_2020-11-04 03_37_00

  • As of this writing (2021-04-08), it appears that ripping a seiga gallery doesn't preserve the date of any image, even though ripping a direct link to an image post does provide the date. For now I have used "{date:?_//}" so dates are still fetched for direct rips, but to standardize your filenames you may unfortunately want to remove it so they match your gallery rips, until this is fixed, if ever, if even possible.
Twitter
        "twitter":
        {
            "replies": true,
            "retweets": false,
            "twitpic": false,
            "videos": true,

            "cookies": "C:\\Users\\User\\cookiestw.txt",
            "cookies-update": true,

            "directory": ["twitter", "{user[name]}"],
            "filename": "{user[name]}_{tweet_id}_{date}_{num}.{extension}"
        },

example url:

https://twitter.com/Himazin88/status/1353633551837589505

default:

1353633551837589505_1

template:

Himazin88_1353633551837589505_2021-01-25 09_19_22_1

  • When ripping from twitter, rip the "media" tab rather than just the plain twitter profile, as there have been accounts of this yielding more results (even without authentication)
  • source:
  • Twitter is unfortunately finicky and unreliable. There have been times where I found that a search should return results but didn't, and I've missed results despite the query being formatted to include them: example 1, example 1.1, example 2, example 2.1 (NSFW warning for both). So sometimes twitter appears to just fail. If possible, please take note of how to reproduce the failure and share it with the appropriate persons.
Weasyl
        "weasyl":
        {
            "directory": ["weasyl", "{owner_login}"],
            "filename": "{owner_login}_{submitid}_{title}_{date}.{extension}",

            "api-key": "placeholder"
        },

example url:

https://www.weasyl.com/~fluffkevlar/submissions/1622631/ink-eyes

default:

1622631 Ink-Eyes

template:

fluffkevlar_1622631_Ink-Eyes_2018-04-13 01_37_40

  • Replace instance of "placeholder" with the appropriate value

Website Filename Only

Below are filename structures for sites where I personally found that a plain "{filename}" filename was useful enough:

Furaffinity
        "furaffinity":
        {
            "postprocessors": [{
                "name": "metadata",
                "directory": "metadata"
            }],

            "descriptions": "html",
            "include": "gallery,scraps",

            "directory": ["furaffinity", "{user}"],
            "filename": "{filename}.{extension}",

            "cookies": "C:\\Users\\User\\cookiesfa.txt",
            "cookies-update": true
        },

example url:

https://www.furaffinity.net/view/12761971/

default:

12761971 Hearth Stone

template:

1392572291.amadnomoto_jaina

  • By default, ripping a furaffinity user page only rips the "gallery" section of their profile, so if you want "scraps" you would have to rip the "scraps" page directly. The "include" line changes this to give you everything. If you wish to return to the default behavior, simply remove that line.
  • Preserving the metadata of furaffinity uploads is particularly useful, because furaffinity often heavily resizes and jpeg-compresses images. Some artists upload full-res versions of their images to 3rd-party hosting sites and link in the description to evade this.
  • The "descriptions" line is necessary to avoid furaffinity descriptions being truncated in the metadata file. Source 1.1, source 1.2
Hentai Foundry
        "hentaifoundry":
        {
            "directory": ["hentaifoundry", "{user}"],
            "filename": "{filename}.{extension}"
        },

example url:

https://www.hentai-foundry.com/pictures/user/noise/807617/Felicia20200517

default:

hentaifoundry_807617_Felicia20200517

template:

noise-807617-Felicia20200517

  • Ripping a hentai foundry user page only rips the "pictures" section of their profile; if you want "scraps" you will have to rip the "scraps" page directly.

Tagging With TMSU

Using tmsu, files downloaded with gallery-dl can be tagged much like Hydrus tags its downloads. Place the following bash script somewhere in your PATH; ~/.local/bin/gdl-tag is recommended.

#!/bin/bash
# gdl-tag: reads the gallery-dl metadata JSON written next to a downloaded file
# and applies its tags to that file with tmsu.

function get_tags () {
  json=$1
  # Build a jq query such as: .tags_artist//""|split(" ")[]|select(length > 0)
  # i.e. take the space-separated tag string for this category and split it into individual tags
  query=".tags_$2"'//""|split(" ")[]|select(length > 0)'
  local -n tags=$3
  for tag in $(jq -r "$query" "$json") ; do
    tags+=($tag)
  done
}

function add_tag_type () {
  # Prefix every tag in the array with its namespace, e.g. "creator=artistname"
  type=$1
  local -n tags=$2
  i=0
  for value in ${tags[@]} ; do
    tags[$i]="$type=$value"
    i=$((i + 1))
  done
}

image="$1"
json="$image.json"
shift

rating=$(jq -r .rating "$json")

tags_artist=()
get_tags "$json" artist tags_artist
add_tag_type creator tags_artist

tags_character=()
get_tags "$json" character tags_character
add_tag_type character tags_character

tags_copyright=()
get_tags "$json" copyright tags_copyright
add_tag_type series tags_copyright

tags_metadata=()
get_tags "$json" metadata tags_metadata
add_tag_type meta tags_metadata

tags_general=()
get_tags "$json" general tags_general


tags=()
if [ "$rating" != "null" ] ; then
  tags+=("rating=$rating")
fi
tags+=(${tags_artist[@]})
tags+=(${tags_character[@]})
tags+=(${tags_copyright[@]})
tags+=(${tags_metadata[@]})
tags+=(${tags_general[@]})
tmsu tag "$image" ${tags[@]}

Your ~/.config/gallery-dl/config.json should also set extractor.tags to true in addition to whatever else you have. The bare minimum config.json looks like this:

{
  "extractor": {
    "tags": true
  }
}

To test it out, try the following command. It will download 2 images and tag them using the gdl-tag script.

gallery-dl 'https://gelbooru.com/index.php?page=post&s=list&tags=tomari_%28veryberry00%29+shishiro_botan+' --write-metadata --exec 'gdl-tag {}' --range '1-2' --no-skip
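
Once files are tagged you can query them back with tmsu (note that tmsu needs a database first; run "tmsu init" in your download directory if you haven't). For example, after the test command above (the file path below is a placeholder):

# list every tagged file featuring this character
tmsu files character=shishiro_botan
# show all tags on a particular file
tmsu tags path/to/image.jpg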

Ugoira

Download ugoira from danbooru

Ugoira are downloaded as lossy-encoded WEBMs from danbooru by default. To save the original, add the following to your gallery-dl config:

{
  "extractor": {
    "danbooru": {
      "ugoira": true
    }
  }
}

Saving ugoira losslessly

To save ugoira works in a playable format (losslessly), use --ugoira-conv-copy or add the following postprocessor to your gallery-dl config:

{
  "extractor": {
    "postprocessors": [
      {
        "name": "ugoira",
        "extension": "mkv",
        "ffmpeg-args": ["-c", "copy", "-nostdin", "-y"],
        "ffmpeg-demuxer": "mkvmerge",
        "ffmpeg-output": false,
        "repeat-last-frame": false,
        "whitelist": ["pixiv", "danbooru"]
      }
    ]
  }
}

Note: --ugoira-conv-copy and the above postprocessor will delete the original zips. If this is important to you, add the below to your postprocessor config.

"keep-files": "true"

mpv

home page

"mpv is a free (as in freedom) media player for the command line. It supports a wide variety of media file formats, audio and video codecs, and subtitle types."

Stream Videos In mpv

You can "shift + right click" the folder mpv is in to gain access to the right click option "Open command window here". You can then run the command "mpv [link]" to stream any link in mpv instead of your browser.

  • mpv also supports dragging links onto it to stream video, although I personally have never succeeded at doing this.
  • FFmpeg is necessary for this functionality. Download here and drag the contents of the "bin" folder into the mpv directory, alongside "mpv.exe"
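
For example, using the same video linked in the youtube-dl section below (note that for YouTube and similar sites, mpv also relies on youtube-dl or yt-dlp being available):

mpv "https://www.youtube.com/watch?v=7E-cwdnsiow"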

youtube-dl

home page

GitHub page

https://github.com/ytdl-org/youtube-dl/#description

"youtube-dl is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like"

  • FFmpeg is necessary for this. Download here and drag the contents of the "bin" folder into the youtube-dl directory, alongside "youtube-dl.exe"

Command Templates

Annotations, Description, Metadata, Subtitles, Thumbnail

source

youtube-dl -i --cookies youtube-dl_cookies.txt -o "C:\Users\User\youtube-dl output\%(title)s-%(id)s\%(title)s-%(id)s.%(ext)s" --write-description --write-info-json --write-annotations --write-sub --write-thumbnail https://www.youtube.com/watch?v=7E-cwdnsiow
  • This command has been amended from the template provided in the cited article to use cookies (for age-restricted videos), to continue on download errors (-i), and to specify an output folder
  • Change "User" to your respective user name
  • Change the youtube link to any link of your choosing (not every site is supported)
  • To run the program, provided your "youtube-dl.exe" file is in "C:\Users\User", just open cmd ("Win + R" to open "Run", type "cmd", then hit "enter"; alternatively, search for "cmd" in Windows search and select it from the results), then paste the template command to download from the specific link included in it.

Tips

Fix "403: Forbidden"

source

Run the following command:

youtube-dl --rm-cache-dir