IPv6 Multicast Listener for Elixir

I have yet to find a working example of this, so I feel obligated to post one, especially considering the frustration that a colleague (more intelligent than I am) and I went through before we found a working solution.

Please note that you will not get away with using an old version of OTP: IPv6 multicast joins were broken for a long time.

Here is a simple script that listens for a single packet, prints it out, and exits.

require Logger

address = {0xff1e, 0, 0, 0, 0, 0, 0x70, 0x25}
iface = 0

{:ok, socket} = :gen_udp.open(8208, [
  :inet6,                               # be explicit that this is an IPv6 socket
  {:ip, address},
  {:add_membership, {address, iface}},  # keep this near the top; see below
  {:debug, true},
  :binary,
  {:reuseaddr, true},
  {:recbuf, 8388608},
  {:active, false}                      # passive mode, required for :gen_udp.recv/2
])
Logger.info("UDP socket opened successfully #{inspect(socket)}")
{:ok, message} = :gen_udp.recv(socket, 9)
Logger.info("Received message: #{inspect(message)}")

An interface index of 0 works for global multicast addresses; the kernel should figure out the interface from the routing table. For a link-local or unique-local address, you will need to pass your actual interface index.
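
If you need that index, OTP 22 and later ship a :net module that can look it up by name. A quick sketch (the interface name here is just an example; substitute your own from ip link):

{:ok, iface} = :net.if_name2index(~c"eth0")  # returns the kernel's index for eth0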

NOTE THE ORDER OF THE OPTIONS

If you move :add_membership lower down, you will get a nonsensical eaddrinuse error. Yeah, that took a while to figure out. The other options can be moved around, but :add_membership needs to stay near the top. We assume this is because the options are processed in the order in which they are provided rather than the order in which they need to happen.
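
For illustration, this is the shape of the ordering that fails (a sketch; I have not checked every OTP release, and yours may differ):

# Moving :add_membership below the bind-related options gets you the baffling error:
{:error, :eaddrinuse} = :gen_udp.open(8208, [
  :inet6,
  {:ip, address},
  {:reuseaddr, true},
  {:add_membership, {address, iface}}  # too late; the damage is already done
])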

So let's move on to the GenServer implementation, which exposes the next gotcha.

defmodule Listen do
  use GenServer
  require Logger

  @address {0xff1e, 0, 0, 0, 0, 0, 0x70, 0x25}
  @iface 0
  # Swap these in to compare against IPv4 (and drop :inet6 below):
  # @address {224, 0, 0, 1}
  # @iface {0, 0, 0, 0}

  def start_link(_) do
    GenServer.start_link(__MODULE__, {}, name: __MODULE__)
  end

  def init(_opts) do
    {:ok, socket} = :gen_udp.open(8208, [
      :inet6,                               # explicit IPv6 stack, as above
      {:ip, @address},
      {:add_membership, {@address, @iface}},
      {:debug, true},
      :binary,
      {:reuseaddr, true},
      {:recbuf, 8388608},
      {:active, true}                       # deliver datagrams as process messages
    ])
    Logger.info("UDP socket opened successfully #{inspect(socket)}")
    {:ok, %{socket: socket}}
  end

  def handle_info(msg, state) do
    Logger.info("Received message: #{inspect(msg)} #{inspect(state)}")
    {:noreply, state}
  end
end

Listen.start_link([])
Process.sleep(:infinity)
Listen.start_link([])
Process.sleep(:infinity)

Here, you must pass {:active, true}, or handle_info/2 will never be called; Elixir will just leave the packets sitting in the socket's receive queue. This is not true for IPv4. I left in the commented-out IPv4 options if you want to prove this to yourself.
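
If an unbounded mailbox worries you, the usual compromise is {:active, :once}: open the socket with that option instead of {:active, true} and re-arm after each datagram. A sketch of what handle_info/2 would look like under that scheme:

def handle_info({:udp, socket, src_addr, src_port, payload}, state) do
  Logger.info("#{inspect(src_addr)}:#{src_port} -> #{inspect(payload)}")
  :ok = :inet.setopts(socket, active: :once)  # re-arm so the next datagram is delivered
  {:noreply, state}
end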

I would be remiss if I didn't provide you a command to test these.

echo "Elixir sucks" > /dev/udp/ff1e::70:25/8208
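
If your shell's /dev/udp chokes on IPv6 literals, a couple of lines of Elixir will do the same job (a sketch; on a multi-homed host you may also need to pick an outgoing interface):

{:ok, sender} = :gen_udp.open(0, [:inet6])  # ephemeral port on the IPv6 stack
:ok = :gen_udp.send(sender, {0xff1e, 0, 0, 0, 0, 0, 0x70, 0x25}, 8208, "Elixir sucks")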

Before I leave, I will urge you to hold off on using Elixir for important things. It is an immature pet project with poor performance, and you will be hard-pressed to find engineers well versed in it who can maintain it with any efficiency.

Finding the maximum time_t that gmtime() will handle

This is not another dry technical post; the problem is trivial enough for even the most junior programmer to solve. This post is a reflection on the current state of LLMs.

Since it was a trivial problem, I gave it to ChatGPT, because I was too lazy to type it out myself and it was not going into a production system. I just wanted to know the answer. It gave me this code, which made me laugh out loud:

#include <stdio.h>
#include <time.h>
#include <limits.h>

int main() {
    time_t t = 0;
    struct tm *timeinfo;

    // Test from 0 to the maximum value of time_t
    while (1) {
        timeinfo = gmtime(&t);
        if (timeinfo == NULL) {
            printf("gmtime() failed for time_t = %lld\n", (long long)t);
            break;
        }
        t++;
    }

    printf("Maximum time_t for gmtime(): %lld\n", (long long)(t - 1));
    return 0;
}


This code is technically correct, but I knew immediately that my machine and I would both be dead long before it returned. So I took it and made it return instantly with a binary search:

#include <stdio.h>
#include <time.h>
#include <limits.h>

int main() {
    time_t t = 0, l = 0, u = LLONG_MAX;
    struct tm *timeinfo;

    /* Invariant: gmtime() succeeds at l and fails at u. */
    while (u - l >= 2) {
        t = (u + l) / 2;
        timeinfo = gmtime(&t);
        if (timeinfo) {
            l = t;   /* midpoint is valid: raise the lower bound */
        } else {
            u = t;   /* midpoint failed: lower the upper bound */
        }
    }

    /* Print l, not t: after the last probe t may hold a failing value. */
    printf("Maximum time_t for gmtime(): %lld\n", (long long)l);
    return 0;
}
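
For the record, that is why it returns instantly: a binary search over the full range needs only about log2(LLONG_MAX) ≈ 63 calls to gmtime(), versus the astronomical number the linear scan would have made.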

I got my answer. I went back to ChatGPT and asked how long it would take to run the code it gave me on the world's fastest processor. It estimated 292,278,994 years.

But then something funny happened. I asked it to rewrite its own code to execute in less than 1 second. It spat out a function almost identical to mine, only easier to read:

#include <stdio.h>
#include <time.h>
#include <stdint.h>

int main() {
    time_t low = 0;
    time_t high = INT64_MAX;  // Use the maximum possible time_t value (64-bit)
    time_t mid;
    struct tm *timeinfo;

    // Perform binary search to find the max time_t for which gmtime() works
    while (low < high) {
        mid = low + (high - low) / 2;  // Calculate the midpoint

        timeinfo = gmtime(&mid);  // Try to get the time in UTC

        if (timeinfo == NULL) {
            // gmtime failed, so the maximum valid time_t must be less than mid
            high = mid - 1;
        } else {
            // gmtime succeeded, so the maximum valid time_t must be greater than or equal to mid
            low = mid + 1;
        }
    }

    // At the end of the binary search, `high` will hold the maximum valid time_t
    printf("Maximum time_t for gmtime(): %lld\n", (long long)high);
    return 0;
}


That's kind of neat. Maybe it will take my job after all. Maybe I don't need coding skills. Maybe I just need to become a "Prompt Engineer".

But seriously, I now believe LLMs have savant syndrome, like Rain Man if he were high on LSD. Not in the sense that they have amazing abilities or talent, but in the sense that they have various developmental disorders and no common sense.

So far, I have enjoyed coding alongside ChatGPT. Not because it makes me more productive, but because it gives me the occasional comic relief I need to make my job bearable. Working from home, I no longer get to clown around with QA on smoke breaks.

Now I suppose that if you have the patience to ask ChatGPT enough of the right questions, you may eventually get a reasonable code snippet, but that may require you to already know enough of the answer to call out its BS. If you trust anything that comes out of this thing, you are in for a ride.

I have gotten some mileage out of it, though. It has been great as a learning tool. It saves me wading through search results to find simple answers. It rarely has the right fixes for the error messages I feed it, but it often points me in a good direction. If I know nothing about a subject, ChatGPT has consistently been a great place to start. It gives me all the truths and lies I used to get from Wikipedia, but in a faster and more entertaining way.