add container service for local development #4238
Conversation
Other comments (3)
- src/workerd/server/server.h (105-105) The `enableDocker` field has been added to the `Durable` struct, but there's no corresponding field in the `Ephemeral` struct. If Docker functionality should be available for both durable and ephemeral actors, consider adding it to the `Ephemeral` struct as well for consistency.
- src/workerd/io/docker-service.h (16-16) The constructor has an unnecessary semicolon after the function body: `DockerService(kj::String socketPath_): socketPath(kj::mv(socketPath_)) {}`
- src/workerd/io/docker-service.c++ (1-1) The copyright year range (2017-2022) should be updated to include the current year since this is a new file.
1 file skipped due to size limits:
- src/workerd/server/server.c++
Force-pushed 610cd20 to 46b746b
Force-pushed 46b746b to ab04db2
The generated output of …
Force-pushed 811ce9a to 89a9e98
src/workerd/server/workerd.capnp (Outdated)

```capnp
docker @5 :DockerOptions;
# Optional configuration options for local docker container interaction.

struct DockerOptions {
  containerName @0 :Text;
  # Name of the docker container associated with the current DO namespace.
}
```
If this is only going to be used here, then this can probably be simplified into a Group rather than struct. But, I'm not the expert on capnp schemas so I'll defer to @kentonv on what's best for the structure here.
I see a few problems with a group:
- you would have to add an explicit "enabled"
- the field numbers in groups are from the parent struct
- also I wonder if the docker options really belongs here or to a separate map of DO namespaces to Options - I don't think it's required now but it would be easier to migrate.
Maybe for the last point I would move the struct to top level instead of nested in DO, close to DockerConnectionOptions.
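To make the suggestion concrete, a hypothetical sketch of what hoisting the struct to the top level of workerd.capnp could look like (field numbers and placement are illustrative, not the actual schema):

```capnp
# Hypothetical layout: DockerOptions lives at the top level, next to other
# connection-options structs, rather than nested inside the DO config.
struct DockerOptions {
  containerName @0 :Text;
  # Name of the docker container associated with the current DO namespace.
}

struct Durable {
  # ... existing fields elided ...
  docker @5 :DockerOptions;
  # Optional configuration for local docker container interaction.
}
```

A top-level struct keeps its own field-number space and can later be referenced from a separate map of DO namespaces to options without a schema migration, which is the easier-to-migrate property mentioned above.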
src/workerd/io/container-client.h (Outdated)

```cpp
#include <kj/debug.h>

namespace workerd::io {
struct RustAsyncStream final: public kj::AsyncIoStream {
```
heh... what's that you always nag me about remind me to do ;-) ... some code comments in here would be ideal ;-)
src/workerd/io/container-client.c++ (Outdated)

```cpp
kj::Own<ContainerAsyncStream> createContainerStream(
    kj::StringPtr address, kj::StringPtr containerName) {
  auto sharedState = kj::heap<ContainerStreamSharedState>();
```
Why not refcounted? `kj::Rc<T>` + `kj::rc<T>(...)`
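The point of the suggestion is that the shared state has two consumers (the message callback and the stream), so single ownership via `kj::heap` is awkward. A self-contained sketch of the ownership pattern, using `std::shared_ptr` as the standard-library analog of `kj::Rc` so it compiles without KJ (names are illustrative, not the PR's actual types):

```cpp
#include <memory>
#include <queue>

// Hypothetical stand-in for the stream's shared state.
struct ContainerStreamSharedState {
  std::queue<unsigned char> messageQueue;
};

// With kj::heap<T>() the state has exactly one owner. kj::Rc<T> (created via
// kj::rc<T>(...)) is KJ's refcounted pointer; std::shared_ptr plays the same
// role here so both the stream and the callback can keep the state alive.
inline std::shared_ptr<ContainerStreamSharedState> createSharedState() {
  return std::make_shared<ContainerStreamSharedState>();
}
```

Each party then holds its own reference instead of one side borrowing a raw pointer into the other's `kj::Own`.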
```capnp
struct ContainerOptions {
  name @0 :Text;
  # Name of the container associated with the current DO namespace.
```
How does this work? Does each DO instance get its own container? Is this the name of the container image? This really needs a lot more commentary.
```rust
    _params: container::GetTcpPortParams,
    _results: container::GetTcpPortResults,
) -> Promise<(), capnp::Error> {
    Promise::from_future(async move {
```
You need to implement a `rpc::Container::Port::Server` and return an object of that kind. That object implements `connect`, which pumps bytes back and forth between the container TCP port and the incoming bytestream.
See example in fake-container-service.c++ internally.
I think we need to back up and write out a document explaining what's going on here. The design here doesn't appear to match any design we've discussed so far and I have no idea what's going on.
```rust
}

impl container::Server for Server {
    fn destroy(
```
I believe what happens here is that we send a SIGTERM, wait for 5 min for the container to stop, and if it is not stopped then we send a SIGKILL. So I think if this were changed to sending a SIGTERM first, then we waited and checked for status, and then called `remove_container` conditionally, it'd be the right logic.
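The shutdown policy described above (SIGTERM, wait up to a deadline, SIGKILL only if the container never exits) can be sketched as a small self-contained state machine, with the actual Docker calls abstracted behind callbacks; all names here are illustrative, not the PR's API:

```cpp
#include <functional>

enum class StopOutcome { ExitedAfterTerm, Killed };

// sendTerm/sendKill/isStopped stand in for the real Docker API calls;
// pollsBeforeDeadline stands in for the 5-minute wait loop.
inline StopOutcome stopContainer(const std::function<void()>& sendTerm,
                                 const std::function<void()>& sendKill,
                                 const std::function<bool()>& isStopped,
                                 int pollsBeforeDeadline) {
  sendTerm();  // graceful shutdown request first
  for (int i = 0; i < pollsBeforeDeadline; ++i) {
    if (isStopped()) return StopOutcome::ExitedAfterTerm;
  }
  sendKill();  // deadline passed: force termination
  return StopOutcome::Killed;
}
```

`remove_container` would then be called only after either outcome, once the container is known to be stopped.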
I'm not 100% positive that this happens on the `destroy` RPC command though (versus just when we update)... @gabivlj - I tracked things to here at least: https://gitlab.cfdata.org/cloudflare/cc/cloudchamber/-/blob/master/go/cmd/cloudchamberd/internal/services/durableobject/binding.go#L362 Does the `container.sidecar.Stop` command do the two signals, or will this just immediately SIGKILL in this case?
Oh... maybe `stop_container` has the behavior we want? (https://docs.rs/bollard/latest/bollard/struct.Docker.html#method.stop_container) It includes a timeout to wait before it SIGKILLs (https://docs.rs/bollard/latest/bollard/container/struct.StopContainerOptions.html).
So I think:
- If `destroy` is meant to immediately destroy, then we call `kill_container`
- If `destroy` does in fact do the timeout behavior, we use `stop_container` and set the timeout
When `signal()` is called, we just relay that signal to the runtime. When `destroy()` is called, we send a SIGKILL.
Force-pushed 995afc6 to 06e1ed1
@kentonv Adding a Rust crate as a dependency is causing these changes. I don't know if it's possible to avoid, but if anybody knows a way I'm happy to do it. cc @danlapid @mikea @fhanau
Can you please move all the Rust dependency management into a separate commit? As is, my UI is slowing to a crawl and GitHub doesn't seem to want to display most of the files because there's too much stuff in the diff.
@kentonv Yes, I'll do it as soon as possible.
```cpp
// The stream uses shared state to coordinate between the MessageCallback
// (which receives messages from the container asynchronously) and the
// tryRead() method (which provides those messages to the C++ side).
struct ContainerAsyncStream final: public kj::AsyncIoStream {
```
I read the comment but I don't understand the purpose of this class. Why do containers need a special kind of stream?
Force-pushed 06e1ed1 to 95276c5
src/workerd/io/container-client.c++ (Outdated)

```cpp
auto lockedQueue = messageQueue.lockExclusive();
for (auto& byte: message) {
  lockedQueue->push(byte);
}

auto lockedWaiter = readWaiter.lockExclusive();
KJ_IF_SOME(fulfiller, *lockedWaiter) {
  fulfiller->fulfill();
  *lockedWaiter = kj::none;
}
```
Should both locks actually be held simultaneously? Or should each of these be in their own blocks?
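One way to resolve the question is to scope each lock in its own block so the queue lock is released before the waiter lock is taken. A self-contained sketch using `std::mutex` as an analog of `kj::MutexGuarded` (types and names are illustrative, not the PR's code):

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <optional>
#include <queue>

struct StreamState {
  std::mutex queueMutex;
  std::queue<uint8_t> messageQueue;
  std::mutex waiterMutex;
  std::optional<int> readWaiter;  // stand-in for the promise fulfiller
};

inline void deliverMessage(StreamState& state, const uint8_t* data, size_t len) {
  {
    std::lock_guard<std::mutex> lock(state.queueMutex);
    for (size_t i = 0; i < len; ++i) state.messageQueue.push(data[i]);
  }  // queue lock released here, before the waiter lock is taken
  {
    std::lock_guard<std::mutex> lock(state.waiterMutex);
    if (state.readWaiter) {
      // in the real code: fulfiller->fulfill();
      state.readWaiter.reset();
    }
  }
}
```

Never holding both locks at once also rules out lock-ordering deadlocks if some other path ever takes them in the opposite order.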
```cpp
}

size_t min = kj::min(lockedQueue->size(), maxBytes);
KJ_REQUIRE(min > 0, "Should never happen");
```
For these, use `KJ_ASSERT` rather than `KJ_REQUIRE`.
```cpp
size_t min = kj::min(lockedQueue->size(), maxBytes);
KJ_REQUIRE(min > 0, "Should never happen");
memcpy(buffer, &lockedQueue->front(), min);
```
Use `kj::ArrayPtr::copyFrom` instead for more safety.
Force-pushed feb9a8e to cdc59de
Force-pushed 811aaf8 to b28f340
```rust
    .collect::<Vec<String>>();

let config = ContainerCreateBody {
    image: Some(container_name.clone()),
```
It looks like the image ref is the same as container_name? I don't think this is right, because we will have multiple instances running from the same image.
Miniflare will give workerd an image name:tag, which should be used here to create the container from the right image. Then the container name can be whatever you want - something readable for the end user, preferably. I don't know if you have access here to the string the DO instance id is created from, but that would be ideal.
Then all the other methods here should use that container name, not the image name that miniflare passed in.
Maybe we can rename the config option in workerd.capnp to image_tag for clarity?
```rust
use futures::AsyncWrite;
use tokio::sync::mpsc;

unsafe extern "C" {
```
@anonrig let's restore the original switch. This wouldn't work on Windows?
Force-pushed 9ef5aa2 to 3b1d3e1