On the topic of self-organization in swarms of different creatures, and algorithms for modeling such behavior:
1. The GIF is from an experiment by the Harvard Self-Organizing Systems Research Group (ssr.seas.harvard.edu/); they built many very simple identical robots and tested swarm algorithms on them, making them form desired configurations. Video (youtube.com/watch?v=xK54Bu9HFR).
2. A team from the Max Planck Institute of Animal Behavior made the DeepPoseKit library, which uses object and pose recognition with neural networks to track the swarming behavior of animals and insects. Code (github.com/jgraving/deepposeki), article (elifesciences.org/articles/479).
3. The team of Alexander Mordvintsev (author of DeepDream) is studying differentiable cellular automata, where each cell is a small neural network interacting with its neighbors; together, the cells can form a global configuration and restore it after damage. Interactive demo (distill.pub/2020/growing-ca/), short video (youtube.com/watch?v=bXzauli1Ty).
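The core mechanism can be sketched in a few lines of NumPy (a toy illustration of the idea, not the actual Mordvintsev et al. implementation; the grid size, state size, and MLP shapes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable-CA update step: every cell carries a state vector,
# perceives its neighbourhood through fixed Sobel filters, and feeds the
# result through a tiny MLP shared by all cells to get a state update.
STATE = 8   # channels per cell
GRID = 16   # grid side

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def conv2d(img, kernel):
    # 'Same' 2D convolution with zero padding, one channel at a time.
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

# Shared per-cell MLP weights (random here; normally learned by backprop).
W1 = rng.normal(0, 0.1, (STATE * 3, 32))
W2 = np.zeros((32, STATE))  # zero-init output -> identity CA at step 0

def step(state):
    # state: (GRID, GRID, STATE); perception = [identity, sobel_x, sobel_y].
    percep = [state]
    for k in (sobel_x, sobel_y):
        percep.append(np.stack([conv2d(state[..., c], k)
                                for c in range(STATE)], axis=-1))
    x = np.concatenate(percep, axis=-1)   # (GRID, GRID, 3*STATE)
    update = np.maximum(x @ W1, 0) @ W2   # tiny shared MLP
    return state + update

state = rng.random((GRID, GRID, STATE))
print(step(state).shape)  # (16, 16, 8)
```

With the output layer zero-initialized (as in the original work), the very first step is an identity map; training would then shape the updates so the grid grows toward, and regenerates, a target pattern.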

Almost 100 years ago, Wolfgang Koehler conducted his famous experiment on sound symbolism (en.wikipedia.org/wiki/Bouba/ki). People were shown two pictures (the top row) and were asked to choose which of them was "baluba" and which was "takete." The majority of people chose a rounded baluba and an angular takete.

Since then, the experiment has been repeated with people who speak different languages, with two-year-olds, and so on. Researchers also tried changing the words, for example, to buba/kiki. In all cases, the effect was preserved.

Since multi-modal models have become very popular this year, Nearcyan (nearcyan.com/) from Austin decided to see what the CLIP model thinks about these words. In the second row, there are examples of generated images for kiki and buba, in the third — for the forms of "maluma" and "takete."

More details, pictures, and other words are in the original blog post (nearcyan.com/the-bouba-kiki-ef).

I recently wrote about Tom White's neural-network generation of pixel art.

Last weekend, I got to play with the code a bit and added a couple of optional features: palette enforcement and an additional smoothing loss. It turned out unexpectedly well: check out the picture above with several results. More images and a link to a Google Colab are in my Twitter thread (twitter.com/altsoph/status/142).

Top left — the sculpture Trinity by Frank Haase (frank-haase-design.de/galerie/), a translucent cube whose three projections are three different QR-codes. Bottom left — the QR Rubik's Cube with six different messages on different sides (altsoph.com/projects/qrcube/); I once made it as a birthday gift. Top right — my QR code, made using the approach described by Russ Cox in the excellent article QArt Codes (research.swtch.com/qart). Bottom right — the three-layer code invented by Eckart Schadt (eckartschadt.de/Digital-Analog): depending on the distance, the contrast of some pixels changes, and the code is read differently (it works very poorly from the screen, try the printout.)

Also, to keep things connected: my old post on generating mirror QR codes (medium.com/altsoph/double-side).

Tom White (drib.net/), an AI artist from New Zealand, came up with a way to generate pixel art images with VQGAN+CLIP networks. For the second week now, he has been posting a neuro-pixel-art alphabet in this Twitter thread (twitter.com/dribnet/status/142) (he got to the letter W yesterday). I suspect a huge amount of cherry-picking; anyway, he promised to publish a Colab soon, so you can experiment on your own.

If you like this, also pay attention to the 8-bit fan art episode of Rick and Morty (youtube.com/watch?v=x9vcTf3_nr), drawn by Australian animator Paul Robinson (en.wikipedia.org/wiki/Paul_Rob). And if not, check out how much of the Hitchhiker’s Guide to the Galaxy you can fit in a single QR Code (mbuffett.com/posts/qr-code-hit), or read our recent article with Max Ryabinin about the cross-lingual neural networks solving Winograd schemas (arxiv.org/abs/2106.12066).

* Game mechanics of one-dimensional chess (docpop.gumroad.com/l/1DChess), via @backtracking channel.
* Announcement of the talk on the creation of a Tibetan typewriter (twitter.com/ContextualAlt/stat) (the talk will be in a week).
* JPEG XL graphic format is almost Turing complete (dbohdan.com/wiki/jpeg-xl) (via Wolfram's Rule 110 automaton).
* Emoticons that are valid javascript code (twitter.com/aemkei/status/1405).
* Japanese Circular Forest Experiment (twistedsifter.com/2019/03/crop).
* Doom Captcha (vivirenremoto.github.io/doomca).
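Rule 110, mentioned in the JPEG XL item above, fits in a few lines (a plain-Python sketch with zero boundary conditions):

```python
def rule110_step(cells):
    # One step of Wolfram's elementary cellular automaton Rule 110:
    # bit i of the number 110 gives the next state for a cell whose
    # 3-cell neighbourhood encodes the value i.
    n = len(cells)
    rule = 110
    return [(rule >> ((cells[i - 1] if i > 0 else 0) * 4
                      + cells[i] * 2
                      + (cells[i + 1] if i < n - 1 else 0))) & 1
            for i in range(n)]

# A single live cell at the right edge grows into the familiar triangle.
row = [0] * 15 + [1]
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = rule110_step(row)
```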


As usual, there is not enough time for anything, so here is just another selection of strange/exciting things without any special comments (sorry if I missed some sources):

* Article which tries to describe Escher patterns in mathematical language (ams.org/notices/200304/fea-esc).
* HumBugDB - the paper presents a large-scale acoustic mosquito dataset (openreview.net/forum?id=FNUijt) (submitted for NeurIPS 2021, but it will most likely be rejected).
* Archaeological investigation of the origin of the bubble sort algorithm (users.cs.duke.edu/~ola/bubble/) (the authorship, the origin of the name, etc.).
* Video of restoration of the incredibly cool fractal vise (youtube.com/watch?v=QBeOgGt_oW).
* A DIY project for assembling a VGA card (eater.net/vga) via Dima Kuznetsov.
* Video of the design of a spherical gear (youtube.com/watch?v=AHUv9Zda_4).

The other day, I found a Colab with a TPU that runs the shiny GPT-J-6B (colab.research.google.com/gith) (this is kind of an open-source version of GPT-3). The environment takes about 15 minutes to set up, but after that, processing a single prompt is fast enough, around 10 seconds.

I used it to generate jokes about mathematicians walking into a bar, so I'm sharing the results with you (almost without cherry-picking):

[An infinite number of mathematicians walk into a bar], and just one gets his research published. Who is that?

[A negative number of mathematicians walk into a bar], and say "I'm actually going to be less than zero".

[A quaternion number of mathematicians walk into a bar],
A pair of discrete logarithms, A prime gap,
A clash of protocols, A geometrical proof,
A product of exiles, A cosmic lie,
A shape-changing goblet, A parallelogram of regular solids,
An obstruction course, A normal subgroup,
An impulse of bits, An irrational number of settings

The boom around VQGAN+CLIP-generated images and videos keeps growing. I found this video earlier today on Twitter (twitter.com/NerdyRodent/status), but more interesting is a fresh post on the Berkeley ML blog (ml.berkeley.edu/blog/posts/cli) (the whole blog is good, actually). It formulates an exciting idea: we are watching the formation of a new demoscene.

Besides, read a scientific paper on searching for evidence of time travelers on the Internet (arxiv.org/abs/1312.7128), and watch a video of an ancient computer game by Timothy Leary (youtube.com/watch?v=G_BIFeMdsl).

(2/2) They look pretty different but can be simply parameterized by generator codes like `G106R26B4Y44` or `G24K8G2K8`, so it's easy to generate them, and there is already a Twitter bot, alltartans (twitter.com/alltartans), doing exactly that (Fig. B). On the other hand, these patterns resemble unfinished fractals, so there are attempts to draw hyperbolic tartans (curiousandunusualtartans.com/d) (Fig. C). The square of the Cantor set (en.wikipedia.org/wiki/Cantor_s) is also called the Cantor Tartan (and is similar to the Sierpinski carpet (en.wikipedia.org/wiki/Sierpi%C), Fig. D); for some reason, someone is even trying to define a calculus (arxiv.org/abs/1712.01347v2) on it.
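Parsing such generator codes is trivial if we assume they are simply alternating colour-letter/thread-count pairs (my guess at the format; real tartan threadcount notation has extra conventions, like pivot points):

```python
import re

def parse_threadcount(code):
    # Split a compact tartan code like "G106R26B4Y44" into
    # (colour letters, thread count) pairs.
    return [(m.group(1), int(m.group(2)))
            for m in re.finditer(r"([A-Za-z]+)(\d+)", code)]

print(parse_threadcount("G106R26B4Y44"))
# [('G', 106), ('R', 26), ('B', 4), ('Y', 44)]
```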

Also, while writing this post, I discovered a strange carpet sect, Triangle Frenzy (google.com/search?q=Triangle+F).


(1/2) In Scotland, there is such a phenomenon as tartans (en.wikipedia.org/wiki/Tartan): textile patterns unique to districts, clans, families, etc. (examples in Fig. A). Historically, they play a role similar to a coat of arms; they are used for kilts, scarves, and so on. The first known tartan, Falkirk, dates back to ~250 AD (westcoastkilts.com/kilt-histor), and now there are a lot of them: more than 3000 are currently registered in the official register (tartanregister.gov.uk/).

Today it's time for a "strange robots" rubric:

There was a South Korean company, Hankook Mirae Technology, that built giant piloted exoskeletons (just like in MechWarrior). There are documentaries about them (youtube.com/watch?v=3ldJswGpkj), and Bezos once took one for a ride (twitter.com/JeffBezos/status/8) just 5 years ago. Then things suddenly went bad: the company owner, in moments of mental anguish, beat employees, shot at them with a BB gun, forced them to kill chickens, and did other interesting things. In short, last year he was sentenced to 7 years in jail (koreajoongangdaily.joins.com/2), and now even the company's website is down.

By the way, the design of this exoskeleton was made by Vitaly Bulgarov, a famous industrial designer (he also made projects for Ghost in the Shell, Transformers 4, and other movies). There are a lot of powerful works on his website (vitalybulgarov.com).

A visualization of the distribution of piece deaths in chess. Source: reddit.com/r/dataisbeautiful/c

Let me remind you of the detailed scientific paper Survival in chessland (tom7.org/chess/survival.pdf) on the same topic presented at SIGBOVIK 2019 (sigbovik.org/2019/).

Today I learned that ocean ships leave long-lasting traces behind them, just like a tractor in the mud. They can be seen in infrared satellite images, and scientists on a NASA grant have trained a convolutional network to recognize them (agupubs.onlinelibrary.wiley.co).

Also, here is another network trained to restore low-lighted photos (github.com/cchen156/Learning-t).

Or try a harsh online game where you need to speed type TeX formulas (texnique.xyz/).

Look what a great map of Ireland's lighthouses by Neil Southall (twitter.com/neilcfd1/status/13) (with correct timings and flash patterns from www.irishlights.ie).

Or read the preprint (arxiv.org/abs/1911.04773) of our article about the validation of clustering metrics, which was accepted today for ICML 2021.

A while ago, Pasha Gertman shared a link to a project by John Williamson: a Blender plug-in that generates a 3D model from an ASCII diagram of a knot (johnhw.github.io/blender_knots) (left pictures). I showed it to Borislav, and we immediately decided to render the Lynch knot. Borislav drew the diagram, and I hastily made the prerender (right pictures).

More about knots:
* a talk about two different mathematical knot notations (web.archive.org/web/2016041716),
* a note about the smallest knot in the world made of 192 atoms by chemists from Manchester (manchester.ac.uk/discover/news),
* an article about how a twenty-year-old Lisa Piccirillo solved the Conway knot problem (quantamagazine.org/graduate-st).

The British artist Elin Thomas (elinthomas.com/) makes artificial Petri dishes of felt and wool.

Also, read the computer mouse standard from Xerox Palo Alto Research Center (bitsavers.trailing-edge.com/pd) (1981), or hide messages in plain text with steganography based on six invisible Unicode characters (github.com/KuroLabs/stegcloak).
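The trick behind such tools can be sketched like this (a minimal scheme of my own using just two zero-width characters; StegCloak's actual encoding uses six of them plus encryption):

```python
# Zero-width-character steganography: append a secret message's bits as
# invisible characters to ordinary cover text.
ZERO = "\u200b"  # zero-width space     -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def hide(cover, secret):
    bits = "".join(f"{b:08b}" for b in secret.encode())
    return cover + "".join(ONE if bit == "1" else ZERO for bit in bits)

def reveal(text):
    bits = "".join("1" if ch == ONE else "0"
                   for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode()

stego = hide("Hello, world!", "hi")
print(stego == "Hello, world!")  # False, though both render identically
print(reveal(stego))             # hi
```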

All this usually looks like a pure game of the mind, unburdened by everyday concerns. Still, the neural network boom has made all sorts of multidimensional embeddings and representations popular (for words, texts, or pictures), and such dirty things happen there regularly. Recently, I ran into one of them in my own work:

Let's take a 100-dimensional space and choose 42 points in it uniformly at random from the unit hypercube. Number them in some random fixed order, from 1 to 42. What is the probability that there exists an axis such that our points, projected onto it, line up in the given order? Answer: more than 99%. If you are interested, you can check it with my empirical Python test script (gist.github.com/altsoph/1c8465) (be patient: it takes quite a long time to solve systems of linear inequalities by intersecting half-spaces for each pair of points).


Another example is Borsuk's conjecture (en.wikipedia.org/wiki/Borsuk%2) about the possibility of splitting an n-dimensional solid of diameter 1 into n+1 solids of diameter less than 1. It has been proved for n<=3 and disproved for n>=64. Everything in between is tormenting suspense.


One popular example: take a square on a plane and inscribe a circle in it. Clearly, the circle covers most of the square's area. Next, take a cube and inscribe a sphere in it. Again, the sphere covers most of the cube's volume. But in the four-dimensional case, the hypersphere covers less than a third of the hypercube's volume, and as the number of dimensions grows further, the ratio of their volumes converges to zero. Meanwhile, the Euclidean distance from the center of an n-dimensional cube to any of its 2^n corners grows as sqrt(n), i.e., indefinitely. Yet the bulk of the volume (i.e., most uniformly random points) inside such a cube lies at a distance from the center with mean around sqrt(n/3) and variance shrinking to zero. In short, an n-dimensional cube is a weird place, with a bunch of corners and an empty center.
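Both claims are easy to check numerically (a quick sketch; the ball/cube ratio uses the standard Gamma-function volume formula, and the distance check uses the cube [-1, 1]^n, whose corners sit at distance sqrt(n) and where each coordinate has mean square 1/3):

```python
import math
import numpy as np

def ball_to_cube_ratio(n):
    # Volume of the unit n-ball, pi^(n/2) / Gamma(n/2 + 1),
    # divided by the volume of the circumscribing cube [-1, 1]^n.
    return math.pi ** (n / 2) / (math.gamma(n / 2 + 1) * 2 ** n)

for n in (2, 3, 4, 10, 20):
    print(n, ball_to_cube_ratio(n))
# n=4 already drops below 1/3, and the ratio heads rapidly to zero.

# Uniform points in [-1, 1]^100: distances to the center concentrate
# around sqrt(n/3), far from both the center and the sqrt(n) corners.
pts = np.random.default_rng(0).uniform(-1, 1, (100_000, 100))
dists = np.linalg.norm(pts, axis=1)
print(dists.mean(), math.sqrt(100 / 3))
```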
