Miscellaneous Findings X: Catching up somewhat

This is a roundup of miscellaneous things that I’ve found out about (or have rediscovered). I put the findings that translate well to speech on my podcast, Small Findings. The rest (which are often technical findings), I put here.

Well, I forgot about doing this, so it’s been over a year since I’ve posted findings. So, here are 17 of them!

Audio latency

In QJackCtl’s setup dialog, there are two params that affect audio latency:

  • Frames/period
  • Periods/buffer

If the sample rate is 44,100 Hz, then a frame takes 1/44,100 seconds. So, how long does it take to get a frame through the computer?

If frames/period is 1024 and periods/buffer is 2, each buffer carries 1024 * 2 frames. That’s 2048 frames. Since each frame takes 1/44,100 seconds, the buffer takes 2048/44,100 seconds = 46.44 ms to process.

TODO: What is involved in processing exactly, though?

So, sound input goes in, then at least 46.44 ms later, something comes out.

That’s kind of a lot! If you are playing a guitar into your computer, then having an amp modeler process it and send it back out, you may notice a delay that throws you off. I read that every 10 ms of latency corresponds to being ~10 feet further from an analog sound source. So, 10 ms is acceptable for me.

When I set frames/period to 128 and periods/buffer to 2 to get a latency of 5.8 ms, I started getting xruns when using Guitarix. The computer was not keeping up with the incoming buffers, so it’d dump zeroes to the output, which sound like pops.

I found out that my computer, by default, dynamically slows down the CPU to save power, which causes latency issues.

I was getting pops in my guitar amp modeler (and sometimes in Pd), and turning off that frequency scaling took care of it.

I was clued in by the install script for Ardour, which said this:

!!! WARNING !!! - Your system seems to use frequency scaling.
This can have a serious impact on audio latency.
For best results turn it off, e.g. by choosing the ‘performance’ governor.

So, even though that app crashes when I try to create a project in it, I’m thankful to it.

There’s an indicator-cpufreq package you can install, which puts an applet in the system menu bar that lets you switch the governor!

sudo apt-get install indicator-cpufreq

When I set the governor to “Performance” via that applet, I don’t get the pops anymore.

#cpu #frequencyscaling #audio #ubuntu #jack

C++

Complex numbers

C (not just C++) has supported complex numbers since 1999!

creal and cimag will get you the components of a complex number.

Mapping

std::transform seems to be the closest thing to JavaScript’s map, e.g. plucking the imaginary parts from an array of complex values:

 template <size_t N>
 static void getImaginary(const array<complex<float>, N>& inComplexVals, array<float, N>& outImagVals) {
     transform(inComplexVals.begin(), inComplexVals.end(), outImagVals.begin(),
               [](complex<float> c) -> float { return c.imag(); });
 }

In C++11, you must specify the range to operate on in the input container (the first two params). Later versions (C++20’s std::ranges::transform) let you pass the container itself.

Anonymous inline functions

(By “inline,” I mean that the function can be defined in the body of another function, not that it’ll be inlined.) These were introduced in C++11. They go like this:

    [](type param) -> returnType { <function body> }

The [] at the beginning is the capture list; it’s also what lets the compiler parse this as an anonymous function (a lambda).

Misleading overload compiler errors

Sometimes an error like:

    error: cannot resolve overloaded function ‘imag’ based on conversion to type ‘std::array<float, 1024>::value_type’ {aka ‘float’}

In reference to code like:

    imagVals[i] = complexFFTData[i].imag;

Just means that you used a function as a value. Here’s the fix:

    imagVals[i] = complexFFTData[i].imag();

Appending to files

When calling ofstream::open or the constructor, pass std::ios_base::app as the second param. (ios_base refers to input/output streams, not to iOS; app is append, not application.)

Terms

  • POD: plain old data

const

  • When you create an object inline in a function call, that temporary can only bind to a const (or rvalue) reference parameter, so the callee effectively sees it as const.

Cat health

Glucosamine

Glucosamine is labeled with a disclaimer that the FDA cannot verify that it actually helps with joint health.

Dosage

This cat-branded glucosamine recommends a dosage of 125-250 mg per day. Regular glucosamine comes in pills that are 750 mg each.

  • Cat-branded glucosamine has 125 mg glucosamine hydrochloride per 100 mg sodium chondroitin sulfate and 1 mg manganese.
    • It’s $17.95 for 80 capsules; $0.001795 per mg of glucosamine.
  • Generic human glucosamine has the same ratio: 750 mg glucosamine hydrochloride per 600 mg sodium chondroitin sulfate. No manganese, though.
    • It’s $17.99 for 180 capsules; $0.000133259 per mg of glucosamine. Cat-branded glucosamine is 13.47 times more expensive than regular glucosamine.

Pimobendan

  • https://en.wikipedia.org/wiki/Pimobendan
    • In normal hearts it increases the consumption of oxygen and energy to the same degree as dobutamine but in diseased hearts it may not.[4][5] Pimobendan also causes peripheral vasodilation by inhibiting the function of PDE3. This results in decreased resistance to blood flow through systemic arterioles, which decreases afterload (decreases the failing heart’s workload) and reduces the amount of mitral regurgitation.[6][7]

  • 1.25mg twice a day, one hour before meals.
    • Chewy does not sell tablets in this size. They have the liquid kind, though.
    • Cost from Porter Sq.: ?

Amlodipine

Eye massagers

Finding out what’s using a port

There’s lsof -i TCP, but that sometimes misses it. Maybe that’s for active connections?

This seems better:

ss -lntp

It should list the processes bound to TCP ports, but sometimes the process field is blank. Run it with sudo to see processes owned by other users.

#port #unix #network

Flickr

Making all public photos non-public

  • Go to Organizr.
  • Select “Only show public content” in the bottom bar.
  • Select all. Drag the bottom bar selection to the workspace.
  • Go to the permissions menu and change the permissions for those photos.
  • Seems to run for about an hour (getting through 3000 photos) before it fails, and you have to start over.

Getting Noita to run on Ubuntu

  • Get the game from GOG, not Steam.
  • Run it with Wine: wine noita.exe

Getting the Switch Pro Controller to work with it

  • Install https://github.com/nicman23/dkms-hid-nintendo
  • Install https://github.com/DanielOgorchock/joycond
  • Turn on Bluetooth in Ubuntu.
  • Hit the small circle button on the controller next to the USB port to put it in pairing mode.
  • Pair with it from the Bluetooth system panel in Ubuntu. The green lights on the bottom will “move” left and right.
  • In Noita, go to Settings | Input and choose “Nintendo Switch Pro Controller (js)”
  • Press the shoulder buttons on the controller together to complete the pairing on the controller side.
  • You may have to start a new game for these settings to take effect.

Wine issues

One of the possible causes of Noita running really slowly is Wine not being able to use an Intel graphics tool. Once, after a Wine update, I got this in the console:

MESA-INTEL: warning: Performance support disabled, consider sysctl dev.i915.perf_stream_paranoid=0

Here, it’s talking about trying to use a graphics analysis tool. Does this matter? In one case, after I ran sudo apt-get install git libvulkan1 mesa-vulkan-drivers vulkan-utils and sudo sysctl dev.i915.perf_stream_paranoid=0, Noita ran at normal speed again. But there was another time that it didn’t.

Though the next time you reboot, the warning comes back, so maybe it’s worth putting it in init.something?

Was installing the Vulkan stuff really what fixed it? It is a mystery.

Intel has a long article about setting up the tool, which involves compiling the Linux kernel somehow? Fortunately, that wasn’t necessary for me.

ntlm_auth

0118:err:winediag:ntlm_check_version ntlm_auth was not found or is outdated. Make sure that ntlm_auth >= 3.0.25 is in your path. Usually, you can find it in the winbind package of your distribution. 0118:err:ntlm:ntlm_LsaApInitializePackage no NTLM support, expect problems

Reinstalling winbind gets rid of that error:

sudo apt-get remove winbind && sudo apt-get install winbind

#game #switch

Getting apps to show up in Ubuntu search

You know how when you hit ctrl+space, you can type an app name and then launch it from there? Sometimes an app won’t show up there if you’ve installed it as an AppImage or just downloaded it to run from wherever you put it instead of using a Snap.

That’s not the actual reason it’s not showing up in desktop search, though. It’s because it doesn’t have a .desktop file.

You can look at a bunch of .desktop files in /usr/share/applications, then copy and edit one of them to make a new one.

Here is a minimal one that I made for Obsidian, named obsidian.desktop, which I run as an AppImage because there was no Snap when I first got it:

[Desktop Entry]
Name=Obsidian
GenericName=Notes organizer
Comment=Organize notes
Exec=/home/myuserdir/apps/Obsidian-1.0.3.AppImage
Terminal=false
Type=Application
Keywords=notes;organizer;
Icon=/home/myuserdir/apps/obsidian-meta/obsidian-icon-256.png
Categories=Utility;TextEditor;
StartupNotify=false
MimeType=text/english;text/plain;text/x-markdown;

Now it shows up, and I don’t have to open a new terminal tab to launch it. I’ve done the same for Firefox because I’m now just using a version downloaded from its site to avoid the issue in which the Snap image makes the mouse cursor really small on Ubuntu 22.04.1.

#ubuntu

Caching in GitHub Actions

Standard dependency installs

Most of the official setup GitHub Actions that set up an environment for a language have their own caching features. For example, setup-python V4 lets you cache pip installs by specifying with: cache: pip when you use that action.

Here is an example config for a Python job:

jobs:
  run-job:
    runs-on: ubuntu-latest
    timeout-minutes: 2
    steps:
      - name: ${{ github.event.client_payload.id }}
        run: echo run identifier ${{ github.event.client_payload.id }}
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
          cache: pip
      - run: |
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

Arbitrary caching

In the case that an action needs to download an artifact (like statistical models in the case of NLTK), GitHub Actions provides a way to cache arbitrary things.

There is an actions/cache@v3 action, which takes key and path parameters. It will check for a cache with the key specified in your action. If it can’t find one, it will set a flag that you can use and (counterintuitively, IMO) register a new cache for that key.

Implicitly (as far as I can tell so far), if your job completes successfully, it will save what’s at path to the cache. So, in the case of a cache miss, your action has an obligation to download whatever it needs and put it at path so that it gets cached for the next run. The action can check for a cache miss with if: steps.<the id of the step in which you use actions/cache@v3>.outputs.cache-hit != 'true'.

Here is an example:

  - name: Restore cached nltk_data
    id: restore-nltk-cache
    uses: actions/cache@v3
    with:
      path: |
        ~/nltk_data
      key: nltk_data
  - name: Install nltk_data on cache miss
    run: |
      NLTK_DATA=~/nltk_data python -m nltk.downloader punkt
      NLTK_DATA=~/nltk_data python -m nltk.downloader stopwords
    if: steps.restore-nltk-cache.outputs.cache-hit != 'true'

If it does find a cache for key, then it puts the cache’s content at path.

Restoring a 27 MB cache took less than a second, as of 2022-07-22.

Data privacy policies

Microsoft and Google

One thing that’s better about using Excel instead of Google Sheets (other than better features, of which I don’t really know how to take advantage) is the data policy:

This data does not include your name or email address, the content of your files, or information about apps unrelated to Microsoft 365 or OneDrive.

Google’s privacy policy is more complex and includes:

To provide services like spam filtering, virus detection, malware protection and the ability to search for files within your individual account, we process your content.

(Emphasis added.)

Atlassian

https://www.atlassian.com/legal/privacy-policy#how-we-use-information-we-collect

Our Services also include tailored features that personalize your experience, enhance your productivity, and improve your ability to collaborate effectively with others by automatically analyzing the activities of your team to provide search results, activity feeds, notifications, connections and recommendations that are most relevant for you and your team.

Dropbox

https://www.dropbox.com/privacy

We collect and use the following information to provide, improve, protect, and promote our Services.

Account information. We collect, and associate with your account, the information you provide to us when you do things such as sign up for your account, upgrade to a paid plan, and set up two-factor authentication (like your name, email address, phone number, payment info, and physical address).

Your Stuff. Our Services are designed as a simple and personalized way for you to store your files, documents, photos, comments, messages, and so on (“Your Stuff”), collaborate with others, and work across multiple devices and services. To make that possible, we store, process, and transmit Your Stuff as well as information related to it. This related information includes your profile information that makes it easier to collaborate and share Your Stuff with others, as well as things like the size of the file, the time it was uploaded, collaborators, and usage activity.

What about the contents of “Your Stuff”?

https://www.pcworld.com/article/447666/dropbox-takes-a-peek-at-files.html
https://arstechnica.com/tech-policy/2014/03/dropbox-clarifies-its-policy-on-reviewing-shared-files-for-dmca-issues/

2013: employee access is intended to be restricted.

But what about process access?

Googlebot and avoiding indexing content from your SPA

In their suggestions for handling missing content in SPAs, Google recommends dynamically adding a meta tag to the page like so:

fetch(`https://api.kitten.club/cats/${id}`)
 .then(res => res.json())
 .then((cat) => {
   if (!cat.exists) {
     const metaRobots = document.createElement('meta');
     metaRobots.name = 'robots';
     metaRobots.content = 'noindex';
     document.head.appendChild(metaRobots);
   }
 });

This strongly implies that the crawler actually will notice DOM changes after the page has been loaded, including <meta> tags.

#crawling #javascript

Area high schools

BB&N

International School of Boston

  • $41,000 per year!
  • They learn French?

Newton North

  • Why is it good?

Belmont High

  • Student-teacher ratio isn’t amazing: 17:1.

Acton-Boxborough

Lincoln-Sudbury

Medford High

  • 10:1 student-teacher ratio?
  • Culinary program
  • Diverse
  • Uneven teacher quality? CS teacher unable to code, for example.
    • “The entire district has a bullying problem and no school except for the Brooks does anything about it.”
  • Stabbing

Fayerweather Street School

NuVu

  • $48K?!
  • Lots of studio time.

Cambridge Friends School

  • $27K!
  • 5:1

Shady Hill

  • 5:1
  • $30-$50K!

Serving local web apps via https

If you need to serve a local web page via https:

Create a cert

https://letsencrypt.org/docs/certificates-for-localhost/

openssl req -x509 -out localhost.crt -keyout localhost.key \
  -newkey rsa:2048 -nodes -sha256 \
  -subj '/CN=localhost' -extensions EXT -config <( \
   printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")

Serving via Python

  • Mash the crt and key files together: cat localhost.crt localhost.key > localhost.pem
  • Create a server script like so:

    from http.server import HTTPServer, SimpleHTTPRequestHandler
    import ssl

    httpd = HTTPServer(("localhost", 443), SimpleHTTPRequestHandler)
    httpd.socket = ssl.wrap_socket(httpd.socket, certfile="./localhost.pem", server_side=True)
    httpd.serve_forever()

  • Start it with sudo python3 server.py.

Serving via Node

https://stackoverflow.com/a/61905546/87798 (Ignore the brew, mkcert stuff)

  • Rename the crt and key files to cert.pem and key.pem.
  • Run the http-server module with http-server -p 443 -S -C cert.pem -o

Moonlander

Wally getting stuck after you’re supposed to reset the keyboard

When this happened to me, it was on a new computer. I never added the udev rule file as described here:

https://github.com/zsa/wally/wiki/Linux-install

After I did that, it worked!

#keyboard #ubuntu

Recording internal audio

https://askubuntu.com/questions/250073/how-to-record-any-internal-sound-in-and-out-using-ubuntu-and-audacity

As soon as we record any audio stream the name of the recording application and the source from where it records will be shown in the Recording tab. We then may be able to change the source to Monitor of <your soundcard> to record the output of our soundcard:

  • First, start recording in Audacity.
  • Then, in pavucontrol, in the Recording tab, in the Audacity entry, select “Monitor of soundcard” (instead of soundcard).

Now it’s recording the output of the soundcard instead of the mic connected to the soundcard.

#ubuntu #audio

find

Excluding dirs from find

Say you want to find all of the directories in which you have a rollup.config.js file.

You could try

find . -name rollup.config.js

But that would also search all of the node_modules directories, and you mean to find projects in which you’ve used rollup directly.

You can add -path and -not to exclude them from the search:

find . -name rollup.config.js -and -not \( -path '*/node_modules/*' \) -prune

Finding files using partial filenames

find . -name "appmanifest*"

Not:

find . -name "appmanifest.*"

find’s -name takes a glob, not a regex: * is a wildcard, but . is matched literally. So the second command would only match names with a literal dot after “appmanifest”.

Deleting

There’s a -delete flag.

find . -name "*.pdf" -delete

#search #unix #command #find

Setting up a new computer running Ubuntu

  • Move over your private key.
  • Install git.
  • Clone password store.
  • Install pass.
  • Clone dotfiles to ~.
  • sudo apt install zsh
  • chsh -s /bin/zsh
    • You need to log out and log back in for this to take effect. You can’t just close and reopen a terminal.
  • In terminal preferences:
    • Change the copy shortcut to alt+c
    • Change the paste shortcut to alt+v
  • sudo apt install curl
  • Install [[neovim]].
  • Install ssh, then rsync your old home directory over.
  • Install make.
  • Install lame.
  • Uninstall ssh.
  • sudo apt install gnome-tweaks
  • Set up the compose key in settings. (It’s scroll lock in your keyboard config.)

장조림(Jangjorim)

From Mom:

Skirt steak or 결이 있는 (well-grained) good beef stew cut into big chunks. Put it in a big pot. Minced garlic, cubed onions, green onions, soy sauce and sugar. Do not put water in it. Cook it on medium-high for 20 minutes, until all the water is gone, and do not let it burn.

OK! How much sugar for how much beef, about?

Depend on beef. 1/2 Lbs beef 3 table spoons 1 table spoon. Depend on your taste. Add 1/2 tea spoon black pepper.

#cooking

rsync

Dealing with spaces in the path

Use --protect-args. e.g.:

rsync -avz --protect-args pi@midnight-giant:"/mnt/storage/media/music/Pikmin Sound Team/Pikmin 3 Deluxe" .

Keeping local files even if they’re missing on remote

It does this by default!

https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories

In order to keep two directories truly in sync, it’s necessary to delete files from the destination directory if they are removed from the source. By default, rsync does not delete anything from the destination directory.

You can change this behavior with the --delete option.

#rsync #command

neovim

Installing plugins

  • Add the plugin to this section in ~/.config/nvim/init.vim:

      call plug#begin()
        Plug 'neomake/neomake'
        Plug 'machakann/vim-highlightedyank'
        Plug 'tmhedberg/SimpylFold'
        Plug 'w0rp/ale'
        Plug 'fisadev/vim-isort'
        Plug 'psf/black', { 'branch': 'stable' }
      call plug#end()
    
  • Restart nvim, then run :PlugInstall.

  • Run :checkhealth to make sure things are OK.

Making a video from images in multiple directories

https://stackoverflow.com/a/37478183/87798

ffmpeg -framerate 30 -pattern_type glob -i '**/*.JPG' \
  -c:v libx264 -pix_fmt yuv420p out.mp4

Rotating video

ffmpeg -i input.mp4 -vf "transpose=1, transpose=1" output.mp4

The above rotates the input 90° clockwise twice. transpose=2 rotates it counterclockwise.

"rotate=PI:bilinear=0" will rotate it 180° and turn off bilinear interpolation to avoid blurriness.

Scaling down

ffmpeg -i big2840.mp4 -vf scale=3840:1080 -c:v libx264 -crf 20 -preset slow smaller.mp4

The scale value is the target size.

wget

wget -m -e robots=off https://the-site-you-want

#bash

Zip

-j gets it to not put the directory structure into the zip file. -x <pattern> makes it ignore files matching the pattern.

DOM

If you parse an SVG document into a tree, then append one of the child nodes to the browser’s current DOM, the node disappears from the source tree: appendChild moves a node that already has a parent, rather than copying it.

unrar

If you get “no files to extract”, the problem may actually be that it doesn’t have permission to create a file or directory, rather than there being no files to extract in the rar file.

Unraring from multiple subdirectories

Remove any unwanted subdirectories first, e.g. “Sample”.

find . -name Sample -exec rm -rf {} \;

Run this script:

#!/bin/bash

for dir in ./*
do
    unrar e "${dir}/*.rar"
done

#rar #command #shell

Tail calls

If the last thing a function does is call another function, that’s a tail call; when the function calls itself that way, it’s tail recursion.

It can usually be optimized by converting the function to a loop (or a jump in assembly) instead of pushing another stack frame for another function call.

#programming