Realtime Servers

Supriya’s Server provides a handle to a scsynth process, allowing you to control the process’s lifecycle, interact with the entities it governs, and query its state.

Lifecycle

Instantiate a server with:

>>> server = supriya.Server()

Instantiated servers are initially offline:

>>> server
<Server OFFLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

To bring an offline server online, boot the server:

>>> server.boot()
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

Quit a running server:

>>> server.quit()
<Server OFFLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

Booting without any additional options will use default settings for the scsynth server process, e.g. listening on the IP address 127.0.0.1 and port 57110, and will automatically attempt to detect the location of the scsynth binary via supriya.scsynth.find().
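
You can call find() yourself to check which binary Supriya would use; the exact return value varies by platform and installation, but may look something like:

>>> supriya.scsynth.find()  # doctest: +SKIP
PosixPath('/usr/local/bin/scsynth')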

You can override the IP address or port via keyword arguments:

>>> server.boot(ip_address="0.0.0.0", port=56666)
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 56666]>

Caution

Attempting to boot a server on a port where another server is already running will result in an error:

>>> server_one = supriya.Server()
>>> server_two = supriya.Server()
>>> server_one.boot()
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>
>>> server_two.boot()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/runner/work/supriya/supriya/supriya/contexts/realtime.py", line 603, in boot
    raise ServerCannotBoot
supriya.exceptions.ServerCannotBoot

Use find_free_port() to grab a random unused port so the boot succeeds:

>>> server_two.boot(port=supriya.osc.find_free_port())
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 53033]>

You can also explicitly select the server binary via the executable keyword:

>>> server.boot(executable="scsynth")
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 56666]>

The executable keyword allows you to boot with supernova if you have it available:

>>> server.boot(executable="supernova")
<Server ONLINE [/usr/local/bin/supernova -R 0 -l 1 -u 56666]>

Boot options

scsynth can be booted with a wide variety of command-line arguments, which Supriya models via an Options class:

>>> supriya.Options()
Options(
    audio_bus_channel_count=1024,
    block_size=64,
    buffer_count=1024,
    control_bus_channel_count=16384,
    executable=None,
    hardware_buffer_size=None,
    initial_node_id=1000,
    input_bus_channel_count=8,
    input_device=None,
    input_stream_mask='',
    ip_address='127.0.0.1',
    load_synthdefs=True,
    maximum_logins=1,
    maximum_node_count=1024,
    maximum_synthdef_count=1024,
    memory_locking=False,
    memory_size=8192,
    output_bus_channel_count=8,
    output_device=None,
    output_stream_mask='',
    password=None,
    port=57110,
    protocol='udp',
    random_number_generator_count=64,
    realtime=True,
    remote_control_volume=False,
    restricted_path=None,
    sample_rate=None,
    threads=None,
    ugen_plugins_path=None,
    verbosity=0,
    wire_buffer_count=64,
    zero_configuration=False,
)

Pass any of the named options found in Options as keyword arguments when booting:

>>> server.boot(input_bus_channel_count=2, output_bus_channel_count=2)
<Server ONLINE [/usr/local/bin/supernova -R 0 -i 2 -l 1 -o 2 -u 56666]>
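
You can also instantiate Options directly to preview a configuration before booting; its fields mirror the keyword arguments accepted by boot() (a minimal sketch):

>>> options = supriya.Options(input_bus_channel_count=2, output_bus_channel_count=2)
>>> options.output_bus_channel_count
2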

Multiple clients

SuperCollider supports multiple users interacting with a single server simultaneously. One user boots the server and governs the underlying server process, and the remaining users simply connect to it.

Make sure to boot the server with maximum_logins set to the maximum number of users you expect to log in at once, because the default login count is 1:

>>> server_one = supriya.Server().boot(maximum_logins=2)

Connect to the existing server:

>>> server_two = supriya.Server().connect(
...     ip_address=server_one.options.ip_address,
...     port=server_one.options.port,
... )

Each connected user has their own client ID and default group:

>>> server_one.client_id
0
>>> server_two.client_id
1
>>> print(server_one.query_tree())
NODE TREE 0 group
    1 group
    2 group

Note that server_one is owned, while server_two isn’t:

>>> server_one.is_owner
True
>>> server_two.is_owner
False

Supriya provides some limited guard-rails against server shutdown by non-owners: quit() accepts a force boolean flag, which non-owners can set to True if they really do want to quit the server. Without force, quitting an unowned server raises an error:

>>> server_two.quit()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/runner/work/supriya/supriya/supriya/contexts/realtime.py", line 894, in quit
    raise UnownedServerShutdown(
supriya.exceptions.UnownedServerShutdown: Cannot quit unowned server without force flag.
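
If a non-owner really does need to shut the server down, they can pass the force flag (skipped here so the session below can continue):

>>> server_two.quit(force=True)  # doctest: +SKIP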

Finally, disconnect:

>>> server_two.disconnect()
<Server OFFLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

Disconnecting doesn’t terminate the server; it continues to run wherever server_one originally booted it.
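
You can verify this by inspecting server_one (exact output varies with your scsynth path):

>>> server_one
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 2 -u 57110]>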

Inspection

Server provides a number of methods and properties for inspecting its state.

>>> server = supriya.Server().boot()

Inspect the “status” of audio processing:

>>> server.status
StatusInfo(actual_sample_rate=44112.817680182205, average_cpu_usage=0.057836733758449554, group_count=2, peak_cpu_usage=0.3044038712978363, synth_count=0, synthdef_count=0, target_sample_rate=44100.0, ugen_count=0)

Hint

Server status is a great way of tracking scsynth’s CPU usage.
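
For example, you could poll the status property to track CPU usage over time (a minimal sketch; the values printed will vary):

>>> import time
>>> for _ in range(3):  # doctest: +SKIP
...     print(server.status.average_cpu_usage, server.status.peak_cpu_usage)
...     time.sleep(1)
... 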

Let’s add a SynthDef and a synth (both explained in later sections) to increase the complexity of the status output:

>>> with server.at():
...     with server.add_synthdefs(supriya.default):
...         synth = server.add_synth(supriya.default)
... 
>>> server.status
StatusInfo(actual_sample_rate=44112.70441181757, average_cpu_usage=0.1426311731338501, group_count=2, peak_cpu_usage=0.3044038712978363, synth_count=1, synthdef_count=33, target_sample_rate=44100.0, ugen_count=20)

Note that synth_count, synthdef_count and ugen_count have gone up after adding the synth to our server. We’ll discuss these concepts in the following sections.

Querying the node tree with query_tree() returns a “query tree” representation, which you can print to generate output similar to SuperCollider’s s.queryAllNodes server method:

>>> server.query_tree()
QueryTreeGroup(node_id=0, annotation=None, children=[QueryTreeGroup(node_id=1, annotation=None, children=[QueryTreeSynth(node_id=1000, annotation=None, synthdef_name='default', controls=[QueryTreeControl(name_or_index='amplitude', value=0.10000000149011612), QueryTreeControl(name_or_index='frequency', value=440.0), QueryTreeControl(name_or_index='gate', value=1.0), QueryTreeControl(name_or_index='pan', value=0.5), QueryTreeControl(name_or_index='out', value=0.0)])])])
>>> print(_)
NODE TREE 0 group
    1 group
        1000 default
            amplitude: 0.1, frequency: 440.0, gate: 1.0, pan: 0.5, out: 0.0

Access the server’s root node and default group:

>>> server.root_node
RootNode(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=0, parallel=False)
>>> server.default_group
Group(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=1, parallel=False)

And access the input and output audio bus groups, which represent microphone inputs and speaker outputs:

>>> server.audio_input_bus_group
BusGroup(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=8, calculation_rate=CalculationRate.AUDIO, count=8)
>>> server.audio_output_bus_group
BusGroup(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=0, calculation_rate=CalculationRate.AUDIO, count=8)

Interaction

The server provides a variety of methods for interacting with it and modifying its state.

You can send OSC messages via the send() method, either as explicit OscMessage or OscBundle objects, or as Requestable objects:

>>> from supriya.osc import OscMessage
>>> server.send(OscMessage("/g_new", 1000, 0, 1))
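
Bundles work the same way; here’s a sketch that wraps the same kind of message in a bundle (assuming OscBundle’s contents parameter; omitting a timestamp typically means “process immediately”):

>>> from supriya.osc import OscBundle
>>> server.send(OscBundle(contents=[OscMessage("/g_new", 1001, 0, 1)]))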

Many interactions with scsynth don’t take effect immediately. In fact, none of them really do, because the server behaves asynchronously. For operations with significant delay, e.g. sending multiple SynthDefs or reading/writing buffers from/to disk, use sync() to block until all previously initiated operations complete:

>>> server.sync()
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

Note

See Open Sound Control for more information about OSC communication with the server, including OSC callbacks.

The server provides methods for allocating nodes (groups and synths), buffers and buses, all of which are discussed in the sections following this one:

>>> server.add_group()
Group(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=1000, parallel=False)
>>> server.add_synth(supriya.default, amplitude=0.25, frequency=441.3)
Synth(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=1001, synthdef=<SynthDef: default>)
>>> server.add_buffer(channel_count=1, frame_count=512)
Buffer(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=0, completion=Completion(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, moment=Moment(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, seconds=None, closed=True, requests=[(AllocateBuffer(buffer_id=0, frame_count=512, channel_count=1, on_completion=None), ...)]), requests=[]))
>>> server.add_buffer_group(count=8, channel_count=2, frame_count=1024)
BufferGroup(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=1, count=8)
>>> server.add_bus()
Bus(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=0, calculation_rate=CalculationRate.CONTROL)
>>> server.add_bus_group(count=2, calculation_rate="audio")
BusGroup(context=<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>, id_=16, calculation_rate=CalculationRate.AUDIO, count=2)
>>> print(server.query_tree())
NODE TREE 0 group
    1 group
        1001 default
            amplitude: 0.25, frequency: 441.299988, gate: 1.0, pan: 0.5, out: 0.0
        1000 group

Resetting

Supriya supports resetting the state of the server, similar to SuperCollider’s CmdPeriod:

>>> server.reset()
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>
>>> print(server.query_tree())
NODE TREE 0 group
    1 group

You can also just reboot the server, completely resetting all nodes, buses, buffers and SynthDefs:

>>> server.reboot()
<Server ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>

Async

Supriya supports asyncio event loops via AsyncServer, which provides async variants of many of Server’s methods. All lifecycle methods (booting, quitting) are async, as are all getter and query methods.

>>> import asyncio
>>> async def main():
...     # Instantiate an async server
...     print(async_server := supriya.AsyncServer())
...     # Boot it on an arbitrary open port
...     print(await async_server.boot(port=supriya.osc.find_free_port()))
...     # Send an OSC message to the async server (doesn't require await!)
...     async_server.send(["/g_new", 1000, 0, 1])
...     # Query the async server's node tree
...     print(await async_server.query_tree())
...     # Quit the async server
...     print(await async_server.quit())
... 
>>> asyncio.run(main())
<AsyncServer OFFLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 57110]>
<AsyncServer ONLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 42016]>
NODE TREE 0 group
    1 group
        1000 group
<AsyncServer OFFLINE [/usr/local/bin/scsynth -R 0 -l 1 -u 42016]>

Use AsyncServer with AsyncClock to integrate with event-loop-driven libraries like aiohttp, python-prompt-toolkit and pymonome.
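
Because every lifecycle and query method is a coroutine, an AsyncServer cooperates naturally with other tasks on the same event loop; here’s a minimal sketch using only the calls shown above:

>>> async def serve_briefly():
...     server = supriya.AsyncServer()
...     await server.boot(port=supriya.osc.find_free_port())
...     await asyncio.sleep(1)  # other coroutines continue running while we sleep
...     await server.quit()
... 
>>> asyncio.run(serve_briefly())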

Lower level APIs

You can kill all running scsynth processes via supriya.scsynth.kill():

>>> supriya.scsynth.kill()

Get access to the server’s underlying process management subsystem via process_protocol:

>>> server.process_protocol
<supriya.scsynth.SyncProcessProtocol object at 0x7f2d9e795610>

Get access to the server’s underlying OSC subsystem via osc_protocol:

>>> server.osc_protocol
<supriya.osc.threaded.ThreadedOscProtocol object at 0x7f2d9e7954f0>

Note

Server manages its scsynth subprocess and OSC communication via SyncProcessProtocol and ThreadedOscProtocol objects, while the AsyncServer discussed above uses AsyncProcessProtocol and AsyncOscProtocol objects.