17 Commits

Author SHA1 Message Date
06a9dd10e1 fix(deps): update tokio-prost monorepo to v0.13.5
2025-02-13 01:11:41 +00:00
55befef95b chore(release): v0.1.0 (#17)
chore(release): 0.1.0

Co-authored-by: cuddle-please <bot@cuddle.sh>
Reviewed-on: https://git.front.kjuulh.io/kjuulh/churn-v2/pulls/17
2025-01-11 15:26:44 +01:00
53cc689dc4 docs: update readme
Next up is differentiating the agents, so that we can execute commands from the CLI to, for example, update dependencies on all machines, restart machines, etc.
2025-01-11 15:22:38 +01:00
1c20383de6 chore: update final repo
2025-01-11 15:11:30 +01:00
53c15a653f feat: add cuddle please
2025-01-11 15:10:59 +01:00
9c5cb6667e chore: update lock"
2025-01-11 15:09:23 +01:00
b0c40196b6 docs: add installation docs
2025-01-11 14:11:02 +01:00
a28a5ca6ee fix: use actual names for files
2025-01-11 13:08:04 +01:00
ea6bfc9c04 feat: enable churn update service
2025-01-11 13:04:54 +01:00
844f8519d5 feat: add updater to install script
2025-01-11 13:03:11 +01:00
1508fbb2bf feat: add updater to install script
2025-01-11 13:02:57 +01:00
ef6ae3f2b1 chore: update default schedule
2025-01-10 21:46:57 +01:00
8923c60d9e feat: add http client
2025-01-10 21:42:35 +01:00
efec76d28c feat: run more often
Signed-off-by: kjuulh <contact@kjuulh.io>
2025-01-05 20:50:49 +01:00
03e23c7d9d feat: enable checking if it should actually run
Signed-off-by: kjuulh <contact@kjuulh.io>
2025-01-04 01:52:05 +01:00
83294306a4 feat: enable having get variable from local setup
Signed-off-by: kjuulh <contact@kjuulh.io>
2025-01-04 01:28:32 +01:00
ceaad75057 feat: inherit output as well
Signed-off-by: kjuulh <contact@kjuulh.io>
2025-01-04 00:35:18 +01:00
15 changed files with 715 additions and 186 deletions

CHANGELOG.md (new file, 165 lines)

@@ -0,0 +1,165 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.1.0] - 2025-01-11
### Added
- add cuddle please
- enable churn update service
- add updater to install script
- add updater to install script
- add http client
- run more often
- enable checking if it should actually run
- enable having get variable from local setup
- inherit output as well
- allow process from external code
- add inherit
- add default no labels
- warn all targets
- update with web assembly components
- add labels to config
- add abstraction around task
- enable webpki roots
- add short connect timeout
- more error logging
- stop the service if running
- setup stream logging
- update script with warn
- disable force again
- make curl silent"
- force update
- use public prod
- run as root
- agent is already setup
- allow errors
- some more debugging
- some more debugging
- stderr to stdout as well
- this should work
- when config has already been setup
- add agent start as well
- update with agent setup
- add install script
- add comments
- use actual internal
- reqwest as native build
- use internal
- add external service host
- add grpc host
- add external vars
- add grpc and env
- add queue
- add common queue
- add discovery
- add tonic
- added tonic
- added longer timer
- fix error message
- add agent
- add churn v2
- initial v2 commit
- reset
- update
- update
- update stuff
- update
- with drone
- with agent db
- with sled db and capnp
- with sled db
- with basic changelog
- with basic package
- with publish
- with monitoring
- with monitor
- with extra churning repl thingy
- with enroll
- add initial churn
- add simple health check
### Docs
- update readme
Next up is differentiating the agents, so that we can execute commands from the CLI to, for example, update dependencies on all machines, restart machines, etc.
- add installation docs
- add notes
### Fixed
- use actual names for files
- *(deps)* update rust crate serde to v1.0.217
- *(deps)* update rust crate serde_json to v1.0.134
- *(deps)* update all dependencies to v28
- *(deps)* update rust crate nodrift to 0.3.0
- *(deps)* update rust crate serde to v1.0.216
- *(deps)* update tokio-prost monorepo to v0.13.4
- *(deps)* update rust crate tokio-util to v0.7.13
- *(deps)* update rust crate bytes to v1.9.0
- *(deps)* update rust crate tower-http to 0.6.0
- *(deps)* update all dependencies
- *(deps)* update rust crate capnp to 0.19.5
- *(deps)* update rust crate capnp to 0.19.4
### Other
- update final repo
- update lock"
- update default schedule
- *(deps)* update rust crate anyhow to v1.0.95
- *(deps)* update rust crate clap to v4.5.23
- *(deps)* update all dependencies
- *(deps)* update rust crate tracing-subscriber to v0.3.19
- *(deps)* update rust crate tracing to v0.1.41
- *(deps)* update rust crate serde to v1.0.215
- *(deps)* update rust crate serde to v1.0.214
- *(deps)* update rust crate serde to v1.0.213
- *(deps)* update rust crate serde to v1.0.210
- *(deps)* update rust crate serde to v1.0.209
- *(deps)* update rust crate serde_json to v1.0.126
- *(deps)* update all dependencies
- *(deps)* update rust crate serde to v1.0.208
- *(deps)* update all dependencies
- *(deps)* update rust crate serde to v1.0.203
- *(deps)* update rust crate anyhow to 1.0.86
- *(deps)* update rust crate anyhow to 1.0.85
- *(deps)* update rust crate anyhow to 1.0.84
- *(deps)* update rust crate itertools to 0.13.0
- *(deps)* update rust crate anyhow to 1.0.83
- *(deps)* update rust crate reqwest to 0.12.4
- *(deps)* update rust crate chrono to 0.4.38
- *(deps)* update rust crate anyhow to 1.0.82
- Merge pull request 'chore(release): v0.1.0' (#4) from cuddle-please/release into main
Reviewed-on: https://git.front.kjuulh.io/kjuulh/churn/pulls/4
- *(release)* 0.1.0
- *(test)* test commit
- *(test)* test commit
- *(test)* test commit
- *(test)* test commit
- Merge pull request 'chore(deps): update all dependencies' (#2) from renovate/all into main
Reviewed-on: https://git.front.kjuulh.io/kjuulh/churn/pulls/2
- *(deps)* update all dependencies
- change to byte slice
- fmt
- fmt
- Add renovate.json
- Release churn-server v0.1.0
- Release churn-agent v0.1.0
- Release churn v0.1.0
- Release churn v0.1.0
- Release churn-domain v0.1.0, churn v0.1.0
- with changelog
- Release churn-domain v0.1.0, churn v0.1.0

Cargo.lock (generated, 502 changed lines; diff suppressed because it is too large)


@@ -3,7 +3,6 @@ members = ["crates/*"]
resolver = "2"
[workspace.dependencies]
churn = { path = "crates/churn" }
anyhow = { version = "1" }
tokio = { version = "1", features = ["full"] }
@@ -12,3 +11,6 @@ tracing-subscriber = { version = "0.3.18" }
clap = { version = "4", features = ["derive", "env"] }
dotenv = { version = "0.15" }
axum = { version = "0.7" }
[workspace.package]
version = "0.1.0"


@@ -1 +1,27 @@
# churn
## Installation
To install churn, you first need a server and one or more agents.
The server can be run via Docker.
```shell
docker run docker.io/kjuulh/churn-v2:latest
```
To install an agent run the following script
```shell
curl https://git.front.kjuulh.io/kjuulh/churn-v2/raw/branch/main/install.sh | bash
```
Configure `~/.local/share/io.kjuulh.churn-agent/churn-agent.toml` using an editor of your choice. The churn agent generates a somewhat random name for each agent; consider giving it something more semantically meaningful to you.
## CLI (TBD)
Using the churn CLI allows sending specific commands to a set of agents.
```
```


@@ -37,3 +37,4 @@ reqwest = { version = "0.12.9", default-features = false, features = [
serde_json = "1.0.133"
wasmtime = "28.0.0"
wasmtime-wasi = "28.0.0"
petname = "2.0.2"


@@ -18,6 +18,7 @@ impl Plan {
Ok(vec![
AptTask::new().into_task(),
PluginTask::new("alloy@0.1.0", self.store.clone()).into_task(),
PluginTask::new("dev_packages@0.1.0", self.store.clone()).into_task(),
])
}
}


@@ -44,7 +44,7 @@ impl State {
let config = AgentConfig::new().await?;
let discovery = DiscoveryClient::new(&config.discovery).discover().await?;
let grpc = GrpcClient::new(&discovery.process_host);
let plugin_store = PluginStore::new()?;
let plugin_store = PluginStore::new(config.clone())?;
let scheduled_tasks = ScheduledTasks::new(plugin_store.clone());
let scheduler = Scheduler::new(scheduled_tasks);
let queue = AgentQueue::new(scheduler);


@@ -8,6 +8,8 @@ use uuid::Uuid;
pub struct AgentConfig {
pub agent_id: String,
pub discovery: String,
pub labels: BTreeMap<String, String>,
}
impl AgentConfig {
@@ -17,6 +19,7 @@ impl AgentConfig {
Ok(Self {
agent_id: config.agent_id,
discovery: config.discovery,
labels: config.labels.unwrap_or_default(),
})
}
}
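
A minimal sketch of the deserialization this config hunk implies, assuming serde and the toml crate; the field shapes mirror the struct above, while the example agent id is a placeholder and the discovery URL is taken from install.sh further down.

```rust
use std::collections::BTreeMap;

use serde::Deserialize;

// Assumed on-disk shape of churn-agent.toml, mirroring the fields read above.
#[derive(Deserialize)]
struct FileConfig {
    agent_id: String,
    discovery: String,
    // Optional on disk; the agent falls back to an empty map via unwrap_or_default().
    labels: Option<BTreeMap<String, String>>,
}

fn main() -> anyhow::Result<()> {
    let raw = r#"
        agent_id = "proud-otter"
        discovery = "https://churn.prod.kjuulh.app"

        [labels]
        node_name = "proud-otter"
    "#;

    let config: FileConfig = toml::from_str(raw)?;
    println!(
        "agent {} -> {} ({} labels)",
        config.agent_id,
        config.discovery,
        config.labels.unwrap_or_default().len()
    );
    Ok(())
}
```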


@@ -8,18 +8,28 @@ use wasmtime::component::*;
use wasmtime::{Config, Engine, Store};
use wasmtime_wasi::{DirPerms, FilePerms, WasiCtx, WasiCtxBuilder, WasiView};
use super::config::AgentConfig;
wasmtime::component::bindgen!({
path: "wit/world.wit",
//world: "churn",
async: true,
with: {
"component:churn-tasks/process/process": CustomProcess
"component:churn-tasks/process/process": CustomProcess,
"component:churn-tasks/http/client": http::HttpClient
}
});
#[derive(Default)]
pub struct CustomProcess {}
mod http;
pub struct CustomProcess {
agent_config: AgentConfig,
}
impl CustomProcess {
pub fn new(agent_config: AgentConfig) -> Self {
Self { agent_config }
}
pub fn run(&self, args: Vec<String>) -> String {
tracing::info!("calling function");
@@ -42,6 +52,10 @@ impl CustomProcess {
}
}
}
pub fn get_label(&self, label_key: &str) -> Option<String> {
self.agent_config.labels.get(label_key).cloned()
}
}
#[derive(Clone)]
@@ -50,9 +64,9 @@ pub struct PluginStore {
}
impl PluginStore {
pub fn new() -> anyhow::Result<Self> {
pub fn new(config: AgentConfig) -> anyhow::Result<Self> {
Ok(Self {
inner: Arc::new(Mutex::new(InnerPluginStore::new()?)),
inner: Arc::new(Mutex::new(InnerPluginStore::new(config)?)),
})
}
@@ -63,6 +77,10 @@ impl PluginStore {
pub async fn execute(&self, plugin: &str) -> anyhow::Result<()> {
let mut inner = self.inner.lock().await;
// FIXME: hack to avoid memory leak issues from instantiating plugins
*inner = InnerPluginStore::new(inner.agent_config.clone())?;
inner.execute(plugin).await
}
}
@@ -71,10 +89,11 @@ pub struct InnerPluginStore {
store: wasmtime::Store<ServerWasiView>,
linker: wasmtime::component::Linker<ServerWasiView>,
engine: wasmtime::Engine,
agent_config: AgentConfig,
}
impl InnerPluginStore {
pub fn new() -> anyhow::Result<Self> {
pub fn new(agent_config: AgentConfig) -> anyhow::Result<Self> {
let mut config = Config::default();
config.wasm_component_model(true);
config.async_support(true);
@@ -89,13 +108,18 @@ impl InnerPluginStore {
|state: &mut ServerWasiView| state,
)?;
let wasi_view = ServerWasiView::new();
component::churn_tasks::http::add_to_linker(&mut linker, |state: &mut ServerWasiView| {
state
})?;
let wasi_view = ServerWasiView::new(agent_config.clone());
let store = Store::new(&engine, wasi_view);
Ok(Self {
store,
linker,
engine,
agent_config,
})
}
@@ -112,11 +136,23 @@ impl InnerPluginStore {
pub async fn execute(&mut self, plugin: &str) -> anyhow::Result<()> {
let plugin = self.ensure_plugin(plugin).await?;
plugin
self.store.gc_async().await;
if plugin
.interface0
.call_execute(&mut self.store)
.call_should_run(&mut self.store)
.await
.context("Failed to call add function")
.context("Failed to call should run")?
{
tracing::info!("job was marked as required to run");
return plugin
.interface0
.call_execute(&mut self.store)
.await
.context("Failed to call add function");
}
Ok(())
}
async fn ensure_plugin(&mut self, plugin: &str) -> anyhow::Result<Churn> {
@@ -148,6 +184,12 @@ impl InnerPluginStore {
let req = reqwest::get(format!("https://api-minio.front.kjuulh.io/churn-registry/{plugin_name}/{plugin_version}/{plugin_name}.wasm")).await.context("failed to get plugin from registry")?;
let mut stream = req.bytes_stream();
tracing::info!(
plugin_name = plugin_name,
plugin_path = plugin_path.display().to_string(),
"writing plugin to file"
);
let mut file = tokio::fs::File::create(&plugin_path).await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
@@ -177,15 +219,19 @@ struct ServerWasiView {
table: ResourceTable,
ctx: WasiCtx,
processes: ResourceTable,
clients: ResourceTable,
agent_config: AgentConfig,
}
impl ServerWasiView {
fn new() -> Self {
fn new(agent_config: AgentConfig) -> Self {
let table = ResourceTable::new();
let ctx = WasiCtxBuilder::new()
.inherit_stdio()
.inherit_stdout()
.inherit_env()
.inherit_stderr()
.inherit_network()
.preopened_dir("/", "/", DirPerms::all(), FilePerms::all())
.expect("to be able to open root")
@@ -195,6 +241,8 @@ impl ServerWasiView {
table,
ctx,
processes: ResourceTable::default(),
clients: ResourceTable::default(),
agent_config,
}
}
}
@@ -216,7 +264,9 @@ impl HostProcess for ServerWasiView {
async fn new(
&mut self,
) -> wasmtime::component::Resource<component::churn_tasks::process::Process> {
self.processes.push(CustomProcess::default()).unwrap()
self.processes
.push(CustomProcess::new(self.agent_config.clone()))
.unwrap()
}
async fn run_process(
@@ -228,6 +278,15 @@ impl HostProcess for ServerWasiView {
process.run(inputs)
}
async fn get_variable(
&mut self,
self_: wasmtime::component::Resource<component::churn_tasks::process::Process>,
key: wasmtime::component::__internal::String,
) -> String {
let process = self.processes.get(&self_).unwrap();
process.get_label(&key).unwrap()
}
async fn drop(
&mut self,
rep: wasmtime::component::Resource<component::churn_tasks::process::Process>,
@@ -237,3 +296,33 @@ impl HostProcess for ServerWasiView {
Ok(())
}
}
impl component::churn_tasks::http::Host for ServerWasiView {}
#[async_trait::async_trait]
impl component::churn_tasks::http::HostClient for ServerWasiView {
async fn new(&mut self) -> wasmtime::component::Resource<component::churn_tasks::http::Client> {
self.clients.push(http::HttpClient::new()).unwrap()
}
async fn get(
&mut self,
self_: wasmtime::component::Resource<component::churn_tasks::http::Client>,
url: wasmtime::component::__internal::String,
) -> Vec<u8> {
let process = self.clients.get(&self_).unwrap();
process
.get(&url)
.await
.expect("to be able to make http call")
}
async fn drop(
&mut self,
rep: wasmtime::component::Resource<component::churn_tasks::http::Client>,
) -> wasmtime::Result<()> {
self.clients.delete(rep)?;
Ok(())
}
}
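
The HostProcess and HostClient implementations above share one handle lifecycle: a constructor pushes a host value into a ResourceTable and returns an opaque handle, method calls resolve that handle with get, and drop deletes it. A minimal standalone sketch of that lifecycle, with FakeClient as a made-up placeholder type:

```rust
use wasmtime::component::{Resource, ResourceTable};

// Stand-in for host-owned state that the plugin only ever sees as an opaque handle.
struct FakeClient {
    id: u32,
}

fn main() -> wasmtime::Result<()> {
    let mut clients = ResourceTable::new();

    // Constructor: store the value and hand back a Resource handle.
    let handle: Resource<FakeClient> = clients.push(FakeClient { id: 7 })?;

    // Method call: resolve the handle back to the host value.
    let client = clients.get(&handle)?;
    println!("resolved client {}", client.id);

    // Drop: remove the entry so the slot can be reused.
    clients.delete(handle)?;

    Ok(())
}
```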


@@ -0,0 +1,12 @@
pub struct HttpClient {}
impl HttpClient {
pub fn new() -> Self {
Self {}
}
pub async fn get(&self, url: &str) -> anyhow::Result<Vec<u8>> {
let bytes = reqwest::get(url).await?.bytes().await?;
Ok(bytes.into())
}
}
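
A small self-contained sketch of how this bytes-returning client behaves on its own, restating the struct from the hunk and assuming the workspace's tokio, reqwest, and anyhow dependencies; the URL is only a placeholder.

```rust
use anyhow::Result;

// Restated from the hunk above: a thin wrapper returning the response body as bytes.
pub struct HttpClient {}

impl HttpClient {
    pub fn new() -> Self {
        Self {}
    }

    pub async fn get(&self, url: &str) -> Result<Vec<u8>> {
        let bytes = reqwest::get(url).await?.bytes().await?;
        Ok(bytes.into())
    }
}

#[tokio::main]
async fn main() -> Result<()> {
    // The host hands this client to plugins through the `http` WIT interface below.
    let client = HttpClient::new();
    let body = client.get("https://example.com").await?;
    println!("fetched {} bytes", body.len());
    Ok(())
}
```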


@@ -30,8 +30,10 @@ impl notmad::Component for AgentRefresh {
&self,
cancellation_token: tokio_util::sync::CancellationToken,
) -> Result<(), notmad::MadError> {
// let cancel =
// nodrift::schedule_drifter(std::time::Duration::from_secs(60 * 10), self.clone());
let cancel =
nodrift::schedule_drifter(std::time::Duration::from_secs(60 * 10), self.clone());
nodrift::schedule_drifter(std::time::Duration::from_secs(60 * 5), self.clone());
tokio::select! {
_ = cancel.cancelled() => {},
_ = cancellation_token.cancelled() => {


@@ -31,6 +31,13 @@ pub async fn execute() -> anyhow::Result<()> {
setup_labels.insert(k, v);
}
if !setup_labels.contains_key("node_name") {
setup_labels.insert(
"node_name".into(),
petname::petname(2, "-").expect("to be able to generate a valid petname"),
);
}
agent::setup_config(discovery, force, setup_labels).await?;
tracing::info!("wrote default agent config");
}
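
A minimal sketch of the label handling around this hunk, assuming labels arrive as key=value strings on the command line (that parsing syntax is an assumption); the node_name defaulting mirrors the code above.

```rust
use std::collections::BTreeMap;

fn main() {
    // Hypothetical CLI input; the real command collects these from its arguments.
    let raw_labels = ["region=eu-west", "tier=worker"];

    let mut setup_labels: BTreeMap<String, String> = BTreeMap::new();
    for raw in raw_labels {
        // Assumed key=value syntax for labels passed on the command line.
        if let Some((k, v)) = raw.split_once('=') {
            setup_labels.insert(k.to_string(), v.to_string());
        }
    }

    // Mirrors the defaulting above: only generate a pet name when no explicit
    // node_name label was provided.
    if !setup_labels.contains_key("node_name") {
        setup_labels.insert(
            "node_name".into(),
            petname::petname(2, "-").expect("to be able to generate a valid petname"),
        );
    }

    println!("labels: {setup_labels:?}");
}
```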


@@ -4,6 +4,14 @@ interface process {
resource process {
constructor();
run-process: func(inputs: list<string>) -> string;
get-variable: func(key: string) -> string;
}
}
interface http {
resource client {
constructor();
get: func(url: string) -> list<u8>;
}
}
@@ -16,4 +24,5 @@ interface task {
world churn {
export task;
import process;
import http;
}


@@ -14,6 +14,16 @@ vars:
- internal: "true"
- internal_grpc: "true"
please:
project:
owner: kjuulh
repository: churn-v2
branch: main
settings:
api_url: https://git.front.kjuulh.io
actions:
rust:
cuddle/clusters:
dev:
env:


@@ -8,15 +8,23 @@ APP_VERSION="latest" # or specify a version
S3_BUCKET="rust-artifacts"
BINARY_NAME="churn"
SERVICE_NAME="${APP_NAME}.service"
SERVICE_UPDATE_NAME="${APP_NAME}-update.service"
TIMER_UPDATE_NAME="${APP_NAME}-update.timer"
INSTALL_DIR="/usr/local/bin"
CONFIG_DIR="/etc/${APP_NAME}"
CHURN_DISCOVERY="https://churn.prod.kjuulh.app"
LOG="/var/log/churn-install.log"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
exec > >(tee -i ${LOG})
exec 2>&1
echo "Starting churn install $(date)"
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}Please run as root${NC}"
@@ -75,12 +83,46 @@ Environment=RUST_LOG=h2=warn,hyper=warn,churn=debug,warn
WantedBy=multi-user.target
EOF
echo "Creating churn update service..."
cat > "/etc/systemd/system/${SERVICE_UPDATE_NAME}" <<EOF
[Unit]
Description=Daily Churn Update Service
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/bin/bash -c 'curl -s https://git.front.kjuulh.io/kjuulh/churn-v2/raw/branch/main/install.sh | bash'
User=root
[Install]
WantedBy=multi-user.target
EOF
cat > "/etc/systemd/system/${TIMER_UPDATE_NAME}" <<EOF
[Unit]
Description=Run Churn Update Daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
EOF
# Reload systemd and enable service
echo "Configuring systemd service..."
systemctl daemon-reload
systemctl enable "${SERVICE_NAME}"
systemctl start "${SERVICE_NAME}"
systemctl enable "${SERVICE_UPDATE_NAME}"
systemctl enable "${TIMER_UPDATE_NAME}"
systemctl start "${TIMER_UPDATE_NAME}"
# Check service status
if systemctl is-active --quiet "${SERVICE_NAME}"; then
echo -e "${GREEN}Installation successful! ${APP_NAME} is running.${NC}"