Tricks

Tips and tricks for improving your cooming experience. This page is meant to house user scripts, tips and tricks for using the software found on the other pages, and other suggestions for improving efficiency or highlighting little-known features.

Scripts

Reminder to always take caution when running scripts that you do not understand. Knowledgeable users are asked to peer-review these scripts for malicious intent and to edit or remove anything suspicious. When submitting a script, please add a short description of what it does and how to run it.

Shell Scripts

JAV

Stream

The JAV family of shell functions and scripts streams or downloads JAV videos from javhdporn.net from the command line. The original JAV script was a shell function posted by an anon in /jp/jav (found directly below).

Just pass the desired JAV ID as the argument (or type it at the prompt); IDs can be found on sites such as JAV Library and R18/DMM.

# Stream a JAV video in mpv by its ID (e.g. from JAV Library or R18/DMM).
jav () {
  # Use the first argument as the ID, or prompt for one on stdin.
  if [ -z "$1" ] ; then
    read -r name
  else
    name="$1"
  fi
  # Scrape the embed URL from the javhdporn.net video page, query the embed host's
  # source API, and pick the last of the 720p/480p/360p files it lists.
  links="$(curl -s "$(curl -s "https://www2.javhdporn.net/video/$name" | grep "embedURL" | grep -o "{.*}" | jq '.["@graph"]' | jq -r '.[].embedURL' | sed '/^null$/d' | sed 's/\/v\//\/api\/source\//')" --data-raw 'r=&d=javmvp.com' | jq -r '.data[] | select(.label | contains("720p", "480p","360p")).file' | tail -n1)"
  # Play the resulting stream URL.
  mpv "$links"
}
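A minimal usage sketch ("ABC-123" is only a placeholder; substitute the ID you want):

# stream by passing the ID as an argument
jav ABC-123
# or run it bare and type/paste the ID at the prompt
jav
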
Derivatives

Many derivatives have been made by anons in /g/cumg since then.

Download

For downloading, refer to the Piracy and Miscellaneous/Downloaders wiki pages for tools and sites.

Erotic Audio

Download

Eraudica

A PowerShell script for a site-rip download of eraudica.com was posted in /g/cumg. (A rewrite of this in POSIX sh would be appreciated.)
To run it on *nix, install PowerShell (pwsh), save the script to a file, and run it with pwsh (or chmod +x it if you add a pwsh shebang); an example invocation is shown after the script.
Each file is saved as ./Title/Eraudica - Title.mp3

# Scrapes one Eraudica listing page and downloads every audio post linked from it.
# Requires PowerShell 7+ (pwsh) for ForEach-Object -Parallel and -SkipHttpErrorCheck.
Function Scrape-Page {
  Param ($url)
  # Collect links that look like audio post pages (/e/eve/<year>/...).
  $links = ($url.links | Where-Object {$_.href -match "/e/eve/(\d{4})"})
  $links | ForEach-Object -Parallel {
    $u = "https://eraudica.com$($_.href)"
    $page = (iwr -UseBasicParsing "$u")
    # Pull the post title out of the embedded player script.
    $m = ($page.content | sls 'var title = "(.*)";').Matches
    if ($null -ne $m) {
      $title = [regex]::Unescape($m.Groups[1].Value)
      Write-Host "Fetching $title"
      # The audioInfo JSON blob contains the direct media URL.
      $asset = ((($page.content | sls "audioInfo = (.*);").Matches.Groups[1].Value) | ConvertFrom-Json).mediaurl
      # Sanitise the title for use as a directory name.
      $dir = $title.Split([IO.Path]::GetInvalidFileNameChars()) -join '_'

      # Skip posts that already have a directory, so re-runs resume where they left off.
      if (!(Test-Path -Path $dir)) {
        $obj = (New-Item -Path "$dir" -ItemType Directory)
        $realname = $obj.Name
        $outpath = [IO.Path]::Combine("$($realname)", "Eraudica - $($realname).mp3")
        iwr -UseBasicParsing $asset -OutFile $outpath -SkipHttpErrorCheck
      }
    }
  } -ThrottleLimit 10
}

$base = "https://eraudica.com/"
$root = iwr -UseBasicParsing $base
# The front page reports the total number of listing pages ("Page 1 of N").
$pages = [int](($root.content | sls 'Page 1 of (\d+)').Matches.Groups[1].Value)

Scrape-Page $root

for ($i = 1; $i -lt $pages; $i++) {
  $url = "$($base)e/eve?page=$i"
  Write-Host "Scraping $url"
  $root = iwr -UseBasicParsing $url
  Scrape-Page $root
}
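Example invocation on *nix (a sketch; "eraudica-rip.ps1" is whatever filename you saved the script under, and PowerShell 7+ is assumed):

# run the saved script with pwsh from the directory you want the rips in
pwsh ./eraudica-rip.ps1
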

Software

Hydrus Network

IPFS

Note: This is not a section for posting IPFS shares; it simply spreads awareness that Hydrus supports IPFS.

Hydrus Network supports IPFS. IPFS is a p2p protocol that makes it easy to share many sorts of data. The Hydrus client can communicate with an IPFS daemon to send and receive files.
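If you want to try it, a rough sketch of the daemon side (assuming the kubo/go-ipfs command-line client); Hydrus is then pointed at the daemon's API address, 127.0.0.1:5001 by default, in its services settings (the exact menu location may vary by Hydrus version):

# one-time repository setup
ipfs init
# leave the daemon running while Hydrus talks to it
ipfs daemon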

Downloaders

Currently the coom.tech host does not allow file uploads, so a link is provided instead. Please edit this page to embed the file directly if the host changes this in the future.

To import downloaders into Hydrus Network, open Hydrus Network, click the network tab, click downloaders, then click import downloaders. A photo of Lain should pop up; save the downloader file, then drag it onto the photo of Lain.

ofans.party

Site: https://ofans.party/#/

ofans.party Downloader: https://8chan.moe/.media/064149085f868b9778a6302b311d54c34a1c9ea781e5d91d85d3b5157f29fd59.png

Note: This downloader only works in gallery or subscription mode; if you're using the main site URL, Hydrus won't be able to recognize it.

Also note: This downloads from IPFS. That means the content is distributed P2P, but I'm setting a gateway in the parser for compatibility (so you don't have to host your own node), specifically https://ipfs.io/ipfs/. Sometimes the gateway won't have the content right away and may return a 504 because it could not fetch the content fast enough; try again and it will work eventually.

You can also choose another gateway, but you'll have to enter it manually in the parser, so I wouldn't recommend it. If you still decide to look into it, a list is available at https://ipfs.github.io/public-gateway-checker/.
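If you want to test a gateway (or retry through those 504s) outside of Hydrus, here is a hedged sketch with curl; <CID> is a placeholder for the content hash you're after:

# retry transient gateway errors (including 504) a few times before giving up
curl -fL --retry 5 --retry-delay 10 -o file.bin "https://ipfs.io/ipfs/<CID>"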

gallery-dl

To quote the gallery-dl github page:

https://github.com/mikf/gallery-dl


Configuration files for gallery-dl use a JSON-based file format.

For a (more or less) complete example with options set to their default values, see gallery-dl.conf.

For a configuration file example with more involved settings and options, see gallery-dl-example.conf.

A list of all available configuration options and their descriptions can be found in configuration.rst.


gallery-dl searches for configuration files in the following places:

Windows:

  • %APPDATA%\gallery-dl\config.json
  • %USERPROFILE%\gallery-dl\config.json
  • %USERPROFILE%\gallery-dl.conf

(%USERPROFILE% usually refers to the user's home directory, i.e. C:\Users\<username>\)


Linux, macOS, etc.:

  • /etc/gallery-dl.conf
  • ${HOME}/.config/gallery-dl/config.json
  • ${HOME}/.gallery-dl.conf


Values in later configuration files will override previous ones.

Command line options will override all related settings in the configuration file(s), e.g. using --write-metadata will enable writing metadata using the default values for all postprocessors.metadata.* settings, overriding any specific settings in configuration files.


  • To translate this to English: to use these settings, open the linked gallery-dl.conf and use the Ctrl+S shortcut in your browser to save it under the default filename ("gallery-dl.conf"). You can also save it by following the "gallery-dl.conf" hyperlink from the quote above, right-clicking the "Raw" button, then clicking "Save Link as..." in the resulting menu (the exact text may vary by browser). Then, assuming your "gallery-dl.exe" file is in "C:\Users\User", put your "gallery-dl.conf" file in there as well; a Linux placement is sketched after this list.
  • For sites where you use a "cookies.txt" for authentication, you can export cookies from your browser with a browser extension. For Firefox I use cookies.txt, which can export only the cookies for the current site. If you export cookies from a private browsing session or container tab, it produces two cookies.txt files; I believe you are fine to consolidate them by pasting the contents of one below the other. I am unfortunately uneducated on why it behaves this way, or what the differences are between the files, if any.
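A quick sketch of putting the config and an exported cookies file in place on Linux and pointing gallery-dl at them (paths and the URL are examples only; on Windows just keep gallery-dl.conf next to gallery-dl.exe as described above):

# put the downloaded config where gallery-dl looks for it
mkdir -p ~/.config/gallery-dl
cp gallery-dl.conf ~/.config/gallery-dl/config.json
# pass an exported cookies file for sites that need authentication
gallery-dl --cookies ~/cookies.txt "https://www.example.com/gallery/someartist"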

Filename Templates

user_post id_image id_title_date

  • Where an "image id" isn't included, "num" is added to the end; where neither appears, it's because the website doesn't allow multiple-image posts.
  • Exceptions to this filename structure include sites where a username isn't included (as distinguished from a "name", which the artist may change far more often and so shouldn't be treated as standardized), sites where filenames differ from their titles (in which case the filename is included), and sites where titles aren't included. To see which keywords a given site actually provides, see the sketch below.
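Before writing a template for a new site, you can ask gallery-dl which keywords it extracts for a URL (a sketch; the URL is just the DeviantArt example used below):

# list available keywords ({title}, {date}, {index}, ...) without downloading anything
gallery-dl -K "https://www.deviantart.com/personalami/art/Valicia-868721085"
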
DeviantArt

"deviantart":
{
    "include": "gallery,scraps",
    "refresh-token": "cache",
    "client-id": "placeholder",
    "client-secret": "placeholder",
    "flat": true,
    "folders": false,
    "journals": "html",
    "mature": true,
    "metadata": true,
    "original": true,
    "quality": 100,
    "extra": true,
    "wait-min": 0,
    "cookies": "C:\\Users\\User\\cookiesda.txt",
    "cookies-update": true,

    "directory": ["deviantart", "{author[username]}"],
    "filename": "{author[username]}_{index}_{title}_{date}.{extension}"
},

example url:

https://www.deviantart.com/personalami/art/Valicia-868721085


default:

deviantart_868721085_Valicia


template:

PersonalAmi_868721085_Valicia_2021-01-30 05_20_24


  • Replace instances of "placeholder" with the appropriate values (your DeviantArt client-id and client-secret); see the note below on the refresh token.
  • Everything except the filename structure for this is sourced from here:
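With "refresh-token" set to "cache", gallery-dl caches an OAuth refresh token once you have linked your account. If you haven't linked it yet, gallery-dl has a built-in helper for this:

# interactively authorize gallery-dl with your DeviantArt account and cache the refresh token
gallery-dl oauth:deviantart
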
Mastodon

"mastodon":
{
    "mastodon.xyz":
    {
        "access-token": "cab65529..."
    },
    "tabletop.social":
    {
        "access-token": "513a36c6..."
    },

    "directory": ["mastodon", "{instance}", "{account[username]!l}"],
    "filename": "{category}_{account[username]}_{id}_{media[id]}_{date}.{extension}"
},

example url:

https://baraag.net/@orenjipiiru/104419352335505520


default:

baraag_104419352335505520_10254929


template:

baraag_orenjipiiru_104419352335505520_10254929_2020-06-28 02_54_31

Newgrounds

"newgrounds":
{
    "postprocessors": [{
        "name": "metadata",
        "directory": "metadata"
    }],

    "directory": ["newgrounds", "{user}"],
    "filename": "{user}_{index}_{title}_{date}{num:?_//}.{extension}"
},

example url:

https://www.newgrounds.com/art/view/sailoryon/yon-dream-buster


default:

newgrounds_1438673_Yon Dream Buster!

newgrounds_1438673_01_Yon Dream Buster!


template:

sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52

sailoryon_1438673_Yon Dream Buster!_2020-09-25 18_22_52_1


  • Ripping a newgrounds user page only rips the "art" section of their profile; if you want "movies" you will have to rip the "movies" page directly (see the sketch after this list). But do note that movies tend to have much larger filesizes than images.
  • Preserving the metadata of newgrounds uploads is particularly useful, because newgrounds heavily resizes and jpeg-compresses all images beyond the first if a user posts multiple at once. Some artists upload alt-versions of their images to 3rd-party hosting sites and link in the description to evade this.
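A sketch of ripping both sections, using the example artist from above (newgrounds user pages live on per-user subdomains):

# art and movies are separate rips
gallery-dl "https://sailoryon.newgrounds.com/art"
gallery-dl "https://sailoryon.newgrounds.com/movies"
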
Nijie

"nijie":
{
    "cookies": "C:\\Users\\User\\cookiesnj.txt",
    "cookies-update": true,

    "username": null,
    "password": null,

    "directory": ["nijie", "{artist_id}"],
    "filename": "{artist_id}_{image_id}_{date}_{num}.{extension}"
},

example url:

https://nijie.info/view.php?id=162282


default:

162282_p0


template:

735_162282_Wed 16 Mar 2016 10_09_46 AM JST+0900_0

Piczel

"piczel":
{
    "directory": ["piczel", "{user[username]}"],
    "filename": "{user[username]}_{id}_{title}_{date}_{num}.{extension}"
},

example url:

https://piczel.tv/gallery/image/25048


default:

piczel_25048_Hats_00


template:

GCFMug_25048_Hats_2020-02-18 05_48_01_0

Pillowfort

"pillowfort":
{
    "directory": ["pillowfort", "{username}"],
    "filename": "{username}_{post_id}_{id}_{title}_{filename}_{date}.{extension}"
},

example url:

https://www.pillowfort.social/posts/1501710


default:

1501710 (sketches) Holo Cosplays Revy 01

1501710 (sketches) Holo Cosplays Revy 02


template:

Seraziel_Art_1501710_1040212_(sketches) Holo Cosplays Revy_9bb7e1918624_Bonus Sketch 1_2020-07-01 03_26_08

Seraziel_Art_1501710_1040213_(sketches) Holo Cosplays Revy_31a2d8ce76c5_bonus sketch 2_2020-07-01 03_26_08


  • Mind the path length for this one. You may want to remove "{filename}" regardless, because of the string of hex garbage it adds (see the template examples above); a command-line override is sketched below.
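A sketch of dropping {filename} for a single run via a -o override instead of editing the config (the URL is the example post from above):

gallery-dl -o "filename={username}_{post_id}_{id}_{title}_{date}.{extension}" "https://www.pillowfort.social/posts/1501710"
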
Seiga

"seiga":
{
    "cookies": "C:\\Users\\User\\cookiessg.txt",
    "cookies-update": true,

    "username": null,
    "password": null,

    "directory": ["seiga", "{user[id]}"],
    "filename": "{user[id]}_{image_id}{date:?_//}.{extension}"
},

example url:

https://seiga.nicovideo.jp/seiga/im10635055


default:

seiga_10635055


template:

51170288_10635055_2020-11-04 03_37_00


  • As of writing (2021-04-08), ripping a seiga gallery doesn't appear to preserve the date of any image, even though ripping a direct link to an image post does provide the date. For now I have used "{date:?_//}" to still fetch dates on direct rips, but to standardize your filenames you might unfortunately want to remove it to match your gallery rip, until this is fixed, if ever, if even possible.
Twitter

"twitter":
{
    "replies": true,
    "retweets": false,
    "twitpic": false,
    "videos": true,

    "cookies": "C:\\Users\\User\\cookiestw.txt",
    "cookies-update": true,

    "directory": ["twitter", "{user[name]}"],
    "filename": "{user[name]}_{tweet_id}_{date}_{num}.{extension}"
},

example url:

https://twitter.com/Himazin88/status/1353633551837589505


default:

1353633551837589505_1


template:

Himazin88_1353633551837589505_2021-01-25 09_19_22_1


  • When ripping from twitter, rip the "media" tab rather than just the plain twitter profile, as there have been accounts of this yielding more results (even without authentication); see the sketch after this list.
  • source:


  • Twitter is unfortunately finicky and unreliable; there have been times where a search should have returned results but didn't, and I've missed results despite the query being formatted to include them: example 1, example 1.1, example 2, example 2.1 (NSFW warning for both). So sometimes twitter appears to just fail. If possible, please take note of how to reproduce the failure and share it with the appropriate persons.
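A sketch of ripping the media tab, using the example account from above:

gallery-dl "https://twitter.com/Himazin88/media"
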
Weasyl

"weasyl":
{
    "directory": ["weasyl", "{owner_login}"],
    "filename": "{owner_login}_{submitid}_{title}_{date}.{extension}",

    "api-key": "placeholder"
},

example url:

https://www.weasyl.com/~fluffkevlar/submissions/1622631/ink-eyes


default:

1622631 Ink-Eyes


template:

fluffkevlar_1622631_Ink-Eyes_2018-04-13 01_37_40


  • Replace the instance of "placeholder" with the appropriate value (your Weasyl API key)

Website Filename Only

Below are filename structures for sites where I personally found that just the site's own "{filename}" was useful enough for me:

Furaffinity

"furaffinity":
{
    "postprocessors": [{
        "name": "metadata",
        "directory": "metadata"
    }],

    "directory": ["furaffinity", "{user}"],
    "filename": "{filename}.{extension}",

    "cookies": "C:\\Users\\User\\cookiesfa.txt",
    "cookies-update": true
},

example url:

https://www.furaffinity.net/view/12761971/


default:

12761971 Hearth Stone


template:

1392572291.amadnomoto_jaina


  • Ripping a furaffinity user page only rips the "gallery" section of their profile; if you want "scraps" you will have to rip the "scraps" page directly (see the sketch after this list).
  • Preserving the metadata of furaffinity uploads is particularly useful, because furaffinity often heavily resizes and jpeg-compresses images. Some artists upload full-res versions of their images to 3rd-party hosting sites and link in the description to evade this.
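A sketch of ripping both sections; "exampleartist" is a placeholder username:

gallery-dl "https://www.furaffinity.net/gallery/exampleartist/"
gallery-dl "https://www.furaffinity.net/scraps/exampleartist/"
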
Hentai Foundry

"hentaifoundry":
{
    "directory": ["hentaifoundry", "{user}"],
    "filename": "{filename}.{extension}"
},

example url:

https://www.hentai-foundry.com/pictures/user/noise/807617/Felicia20200517


default:

hentaifoundry_807617_Felicia20200517


template:

noise-807617-Felicia20200517


  • Ripping a hentai foundry user page only rips the "pictures" section of their profile; if you want "scraps" you will have to rip the "scraps" page directly.
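A sketch of ripping both sections, using the example artist from above (the exact scraps URL format is my assumption and may differ):

gallery-dl "https://www.hentai-foundry.com/pictures/user/noise"
gallery-dl "https://www.hentai-foundry.com/pictures/user/noise/scraps"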