
Redesign Watcher initialization #150

Open
@nikneym


Hi, this issue might not sound important at first, but let me explain how libxev can benefit from it.

Currently, watchers can be created on their own; they can even be initialized from a file descriptor (or from a HANDLE/SOCKET on Windows), which is useful. This approach also separates the logic for event loops and watchers. Watchers take a pointer to the loop when submitting I/O requests. But for some backends, a pointer to the loop might be required earlier, to register a file descriptor with the I/O engine.

My initial reasoning was for IOCP, but after digging deeper I realized other backends can benefit from this too. The changes offered here do not add a field to watcher types; they only modify what their init functions do.

io_uring

Newer versions of io_uring have support for direct descriptors: file descriptors that are owned by the io_uring instance rather than the process. This ownership allows io_uring worker threads and internal polling to perform I/O operations more efficiently, rather than sharing file descriptors back and forth.

io_uring provides a handful of utilities to create file descriptors in the ring rather than in the process, namely:

  • io_uring_prep_socket_direct
  • io_uring_prep_open
  • io_uring_prep_openat

While the above create direct descriptors in the ring, there are also utilities for registering file descriptors with the ring that were created in the process:

  • io_uring_register_files
  • io_uring_prep_files_update
  • io_uring_register_eventfd

If the loop initializes the file descriptors (or the watchers take a pointer to the loop on init), direct descriptors can be presented as an implementation detail rather than a separate API.
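
To sketch what I mean: the Loop and TCP shapes and the registerFd/initFileTable names below are hypothetical, not libxev's actual API; the sketch assumes the register_files/register_files_update methods of Zig's std.os.linux.IoUring wrapper.

const std = @import("std");
const posix = std.posix;

// Hypothetical sketch, not libxev's actual API: the loop owns the ring
// and its fixed-file table; watchers hand their fd to the loop at init
// and keep only an index into that table.
const Loop = struct {
    ring: std.os.linux.IoUring,
    next_index: u32 = 0,

    fn initFileTable(self: *Loop) !void {
        // Reserve a sparse fixed-file table up front (-1 marks an
        // empty slot).
        const empty = [_]posix.fd_t{-1} ** 16;
        try self.ring.register_files(&empty);
    }

    fn registerFd(self: *Loop, fd: posix.fd_t) !u32 {
        // Fill the next free slot; from here on, SQEs can address this
        // fd as a direct descriptor by its table index.
        try self.ring.register_files_update(self.next_index, &[_]posix.fd_t{fd});
        defer self.next_index += 1;
        return self.next_index;
    }
};

const TCP = struct {
    fd_index: u32, // direct descriptor: an index into the table, not a plain fd

    // init takes the loop, so whether the backend uses direct
    // descriptors stays an implementation detail.
    pub fn init(loop: *Loop, fd: posix.fd_t) !TCP {
        return .{ .fd_index = try loop.registerFd(fd) };
    }
};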

I/O Completion Ports

IOCP requires HANDLEs/SOCKETs to be associated with the completion port via CreateIoCompletionPort in order to receive notifications. Currently this is done when submitting I/O and requires completion objects to take a pointer to the loop (associate_fd is called on each submission):

libxev/src/backend/iocp.zig (lines 551 to 562 in 07bcffa):

.read => |*v| action: {
    self.associate_fd(completion.handle().?) catch unreachable;
    const buffer: []u8 = if (v.buffer == .slice) v.buffer.slice else &v.buffer.array;
    break :action if (windows.exp.ReadFile(v.fd, buffer, &completion.overlapped)) |_|
        .{
            .submitted = {},
        }
    else |err|
        .{
            .result = .{ .read = err },
        };
},

libxev/src/backend/iocp.zig (lines 581 to 592 in 07bcffa):

.write => |*v| action: {
    self.associate_fd(completion.handle().?) catch unreachable;
    const buffer: []const u8 = if (v.buffer == .slice) v.buffer.slice else v.buffer.array.array[0..v.buffer.array.len];
    break :action if (windows.exp.WriteFile(v.fd, buffer, &completion.overlapped)) |_|
        .{
            .submitted = {},
        }
    else |err|
        .{
            .result = .{ .write = err },
        };
},

Hence each completion stores a pointer to the loop:

libxev/src/backend/iocp.zig (lines 917 to 919 in 07bcffa):

/// Loop associated with this completion. HANDLE are required to be associated with an I/O
/// Completion Port to work properly.
loop: ?*const xev.Loop = null,

This change can reduce associate_fd calls to one per watcher.
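
Roughly, a loop-aware init could look like the following. The Loop and File types here are hypothetical stand-ins, assuming Zig's std.os.windows.CreateIoCompletionPort wrapper for the association step:

const std = @import("std");
const windows = std.os.windows;

// Hypothetical sketch, not libxev's actual API: associate the HANDLE
// with the loop's completion port once, at watcher init.
const Loop = struct {
    iocp_handle: windows.HANDLE,

    fn associateFd(self: *Loop, handle: windows.HANDLE) !void {
        // After this call, completions for I/O on `handle` are posted
        // to the loop's port; submissions no longer need the loop.
        _ = try windows.CreateIoCompletionPort(handle, self.iocp_handle, 0, 0);
    }
};

const File = struct {
    handle: windows.HANDLE,

    // One associate_fd per watcher instead of one per submission.
    pub fn init(loop: *Loop, handle: windows.HANDLE) !File {
        try loop.associateFd(handle);
        return .{ .handle = handle };
    }
};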

Epoll

Epoll suffers from the same problem as IOCP: file descriptors have to be associated with the epoll fd via epoll_ctl with the EPOLL_CTL_ADD flag. Currently epoll_ctl is called for each submission. This change alone cannot fix the status quo, since epoll does not support request-based I/O (like io_uring or IOCP), but it could create better ground for a redesign.
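
For example, init could do the EPOLL_CTL_ADD once and each submission would only re-arm interest via EPOLL_CTL_MOD. A minimal sketch, assuming Zig's std.posix.epoll_ctl wrapper; the Loop type and the registerFd/armFd names are hypothetical:

const std = @import("std");
const linux = std.os.linux;

// Hypothetical sketch, not libxev's actual API.
const Loop = struct {
    epoll_fd: i32,

    // Called once from watcher init: add the fd to the interest list
    // with no events armed yet.
    fn registerFd(self: *Loop, fd: i32) !void {
        var ev: linux.epoll_event = .{
            .events = 0,
            .data = .{ .fd = fd },
        };
        try std.posix.epoll_ctl(self.epoll_fd, linux.EPOLL.CTL_ADD, fd, &ev);
    }

    // Called per submission: re-arm instead of re-adding.
    fn armFd(self: *Loop, fd: i32, events: u32) !void {
        var ev: linux.epoll_event = .{
            .events = events,
            .data = .{ .fd = fd },
        };
        try std.posix.epoll_ctl(self.epoll_fd, linux.EPOLL.CTL_MOD, fd, &ev);
    }
};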

Other Backends

I don't have insight into the kqueue or WASI backends, so I'm not sure whether they can benefit from this change.

In the end, this would make the API a bit more similar to libuv's and TigerBeetle's I/O, but would allow us to better design how things work internally.
