feat: remove workers from the proxy configuration file #1018

Merged
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -4,6 +4,7 @@

### Changes

- Removed workers list from the proxy configuration file (#1018).
- Added health check endpoints to the prover service (#1006).
- Implemented serialization for `AccountHeader` (#996).
- Updated Pingora crates to 0.4 and added polling time to the configuration file (#997).
22 changes: 5 additions & 17 deletions bin/tx-prover/README.md
@@ -42,7 +42,7 @@ First, you need to create a configuration file for the proxy with:
miden-tx-prover init
```

This will create the `miden-tx-prover.toml` file in your current directory. This file will hold the configuration for the proxy. You can modify the configuration by changing the host and ports of the services, and add workers. An example of a valid configuration is:
This will create the `miden-tx-prover.toml` file in your current directory. This file will hold the configuration for the proxy. You can modify the configuration by changing the host and ports of the services, the maximum size of the queue, and other options. An example configuration is:

```toml
# Host of the proxy server
@@ -61,27 +61,17 @@ max_retries_per_request = 1
max_req_per_sec = 5
# Interval to check the health of the workers
health_check_interval_secs = 1

[[workers]]
host = "0.0.0.0"
port = 8083

[[workers]]
host = "0.0.0.0"
port = 8084
```

To add more workers, you will need to add more items with the `[[workers]]` tags.

Then, to start the proxy service, you will need to run:

```bash
miden-tx-prover start-proxy
miden-tx-prover start-proxy [worker1] [worker2] ... [workerN]
```

This command will start the proxy using the workers defined in the configuration file to send transaction witness to prove.
This command will start the proxy using the workers passed as arguments. The workers should be in the format `host:port`. If no workers are passed, the proxy will start without any workers and will not be able to handle any requests.
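
For example, assuming two workers running locally on ports 8083 and 8084 (the addresses are illustrative), the proxy could be started with:

```bash
# Start the proxy with two workers passed as host:port arguments
miden-tx-prover start-proxy 0.0.0.0:8083 0.0.0.0:8084
```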

At the moment, when a worker added to the proxy stops working and can not connect to it for a request, the connection is marked as retriable meaning that the proxy will try reaching the following worker in a round-robin fashion. The amount of retries is configurable changing the `max_retries_per_request` value in the configuration file.
At the moment, when a worker added to the proxy stops working and the proxy cannot connect to it for a request, the connection is marked as retriable, meaning that the proxy will try reaching another worker. The number of retries is configurable by changing the `max_retries_per_request` value in the configuration file.

## Updating workers on a running proxy

@@ -100,15 +90,13 @@ miden-tx-prover add-workers 0.0.0.0:8085 200.58.70.4:50051
miden-tx-prover remove-workers 158.12.12.3:8080 122.122.6.6:50051
```

This changes will be persisted to the configuration file.

Note that, in order to update the workers, the proxy must be running on the same machine where the command is executed, because the proxy checks that the client address is localhost to avoid security issues.

### Health check

The worker service implements the [gRPC Health Check](https://grpc.io/docs/guides/health-checking/) standard, and includes the methods described in this [official proto file](https://github.com/grpc/grpc-proto/blob/master/grpc/health/v1/health.proto).

The proxy service uses this health check to determine if a worker is available to receive requests. If a worker is not available, it will be removed from the set of workers that the proxy can use to send requests, and will persist this change in the configuration file.
The proxy service uses this health check to determine if a worker is available to receive requests. If a worker is not available, it will be removed from the set of workers that the proxy can use to send requests.
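
As a rough illustration, a worker's health can be queried with the standard `grpc.health.v1` `Check` method. The sketch below uses the client generated by the `tonic-health` crate directly; the worker address and the helper name are illustrative, and the proxy's internal health-check client may be set up differently.

```rust
use tonic::transport::Channel;
use tonic_health::pb::{
    health_check_response::ServingStatus, health_client::HealthClient, HealthCheckRequest,
};

/// Returns `true` if the worker at `addr` (e.g. "0.0.0.0:8083") reports SERVING.
/// Sketch only; error handling and connection reuse are simplified.
async fn worker_is_healthy(addr: &str) -> Result<bool, Box<dyn std::error::Error>> {
    let channel = Channel::from_shared(format!("http://{addr}"))?.connect().await?;
    let mut client = HealthClient::new(channel);
    // An empty service name asks about the overall health of the server.
    let response = client
        .check(HealthCheckRequest { service: String::new() })
        .await?;
    Ok(response.into_inner().status == ServingStatus::Serving as i32)
}
```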

## Logging

59 changes: 1 addition & 58 deletions bin/tx-prover/src/commands/mod.rs
@@ -1,5 +1,3 @@
use std::{fs::File, io::Write};

use clap::Parser;
use figment::{
providers::{Format, Toml},
@@ -9,7 +7,6 @@ use init::Init;
use miden_tx_prover::PROVER_SERVICE_CONFIG_FILE_NAME;
use proxy::StartProxy;
use serde::{Deserialize, Serialize};
use tracing::debug;
use update_workers::{AddWorkers, RemoveWorkers, UpdateWorkers};
use worker::StartWorker;

@@ -24,8 +21,6 @@ pub mod worker;
/// It allows manual modification of the configuration file.
#[derive(Serialize, Deserialize)]
pub struct ProxyConfig {
/// List of workers used by the proxy.
pub workers: Vec<WorkerConfig>,
/// Host of the proxy.
pub host: String,
/// Port of the proxy.
@@ -49,7 +44,6 @@ pub struct ProxyConfig {
impl Default for ProxyConfig {
fn default() -> Self {
Self {
workers: vec![WorkerConfig::new("0.0.0.0", 8083), WorkerConfig::new("0.0.0.0", 8084)],
host: "0.0.0.0".into(),
port: 8082,
timeout_secs: 100,
@@ -77,55 +71,6 @@ impl ProxyConfig {
.extract()
.map_err(|err| format!("Failed to load {} config file: {err}", config_path.display()))
}

/// Saves the configuration to the config file
///
/// This method will serialize the configuration to a TOML string and write it to the file with
/// the name defined at the [PROVER_SERVICE_CONFIG_FILE_NAME] constant in the current directory.
pub(crate) fn save_to_config_file(&self) -> Result<(), String> {
let mut current_dir = std::env::current_dir().map_err(|err| err.to_string())?;
current_dir.push(PROVER_SERVICE_CONFIG_FILE_NAME);
let config_path = current_dir.as_path();

let config_as_toml_string = toml::to_string_pretty(self)
.map_err(|err| format!("error formatting config: {err}"))?;

let mut file_handle = File::options()
.write(true)
.truncate(true)
.open(config_path)
.map_err(|err| format!("error opening the file: {err}"))?;

file_handle
.write(config_as_toml_string.as_bytes())
.map_err(|err| format!("error writing to file: {err}"))?;

debug!("Config updated successfully");

Ok(())
}

/// Updates the workers in the configuration with the new list.
pub(crate) fn set_workers(workers: Vec<WorkerConfig>) -> Result<(), String> {
let mut proxy_config = Self::load_config_from_file()?;

proxy_config.workers = workers;

proxy_config.save_to_config_file()
}
}

/// Configuration for a worker
#[derive(Serialize, Deserialize)]
pub struct WorkerConfig {
pub host: String,
pub port: u16,
}

impl WorkerConfig {
pub fn new(host: &str, port: u16) -> Self {
Self { host: host.into(), port }
}
}

/// Root CLI struct
Expand Down Expand Up @@ -156,13 +101,11 @@ pub enum Command {
StartProxy(StartProxy),
/// Adds workers to the proxy.
///
/// This method will make a request to the proxy defined in the config file to add workers. It
/// will update the configuration file with the new list of workers.
/// This method will make a request to the proxy defined in the config file to add workers.
AddWorkers(AddWorkers),
/// Removes workers from the proxy.
///
/// This method will make a request to the proxy defined in the config file to remove workers.
/// It will update the configuration file with the new list of workers.
RemoveWorkers(RemoveWorkers),
}

19 changes: 13 additions & 6 deletions bin/tx-prover/src/commands/proxy.rs
@@ -10,25 +10,32 @@ use pingora_proxy::http_proxy_service;
use crate::proxy::{LoadBalancer, LoadBalancerState};

/// Starts the proxy defined in the config file.
///
/// Example: `miden-tx-prover start-proxy 0.0.0.0:8080 127.0.0.1:9090`
#[derive(Debug, Parser)]
pub struct StartProxy;
pub struct StartProxy {
/// List of workers as host:port strings.
///
/// Example: `127.0.0.1:8080 192.168.1.1:9090`
#[clap(value_name = "WORKERS")]
workers: Vec<String>,
}

impl StartProxy {
/// Starts the proxy defined in the config file.
///
/// This method will first read the config file to get the list of workers to start. It will
/// then start a proxy with each worker as a backend.
/// This method will first read the config file to get the parameters for the proxy. It will
/// then start the proxy, using each worker passed as a command-line argument as a backend.
pub async fn execute(&self) -> Result<(), String> {
let mut server = Server::new(Some(Opt::default())).map_err(|err| err.to_string())?;
server.bootstrap();

let proxy_config = super::ProxyConfig::load_config_from_file()?;

let workers = proxy_config
let workers = self
.workers
.iter()
.map(|worker| format!("{}:{}", worker.host, worker.port))
.map(|worker| Backend::new(&worker).map_err(|err| err.to_string()))
.map(|worker| Backend::new(worker).map_err(|err| err.to_string()))
.collect::<Result<Vec<Backend>, String>>()?;

let worker_lb = LoadBalancerState::new(workers, &proxy_config).await?;
21 changes: 0 additions & 21 deletions bin/tx-prover/src/proxy/mod.rs
@@ -110,12 +110,8 @@ impl LoadBalancerState {
/// - If the worker exists in the current workers list, remove it.
/// - Otherwise, do nothing.
///
/// Finally, updates the configuration file with the new list of workers.
///
/// # Errors
/// - If the worker cannot be created.
/// - If the configuration cannot be loaded.
/// - If the configuration cannot be saved.
pub async fn update_workers(
&self,
update_workers: UpdateWorkers,
@@ -152,11 +148,6 @@ impl LoadBalancerState {
},
}

let new_list_of_workers =
workers.iter().map(|worker| worker.try_into()).collect::<Result<Vec<_>, _>>()?;

ProxyConfig::set_workers(new_list_of_workers)?;

info!("Workers updated: {:?}", workers);

Ok(())
@@ -556,7 +547,6 @@ impl BackgroundService for LoadBalancerState {
///
/// # Errors
/// - If the worker has an invalid URI.
/// - If a [WorkerConfig] cannot be created from a given [Worker].
fn start<'life0, 'async_trait>(
&'life0 self,
_shutdown: ShutdownWatch,
@@ -575,17 +565,6 @@
// Update the worker list with healthy workers
*workers = healthy_workers;

// Persist the updated worker list to the configuration file
let worker_configs = workers
.iter()
.map(|worker| worker.try_into())
.collect::<Result<Vec<_>, _>>()
.expect("Failed to convert workers to worker configs");

if let Err(err) = ProxyConfig::set_workers(worker_configs) {
error!("Failed to update workers in the configuration file: {}", err);
}

// Sleep for the defined interval before the next health check
sleep(self.health_check_frequency).await;
}
13 changes: 1 addition & 12 deletions bin/tx-prover/src/proxy/worker.rs
@@ -7,7 +7,7 @@ use tonic_health::pb::{
};
use tracing::error;

use crate::{commands::WorkerConfig, utils::create_health_check_client};
use crate::utils::create_health_check_client;

// WORKER
// ================================================================================================
@@ -72,14 +72,3 @@ impl PartialEq for Worker {
self.backend == other.backend
}
}

impl TryInto<WorkerConfig> for &Worker {
type Error = String;

fn try_into(self) -> std::result::Result<WorkerConfig, String> {
self.backend
.as_inet()
.ok_or_else(|| "Failed to get worker address".to_string())
.map(|worker_addr| WorkerConfig::new(&worker_addr.ip().to_string(), worker_addr.port()))
}
}