
Multi-Channel Chat App

info

Please make sure that you've completed and are comfortable with the quickstart before moving on to any of these guides.

This guide will walk you through designing, building, and deploying a multi-channel chat server application to run in your Cosmonic constellation.

About the Application

It's fairly common to see chat applications used as a way of demonstrating distributed applications. We also often see them used to demonstrate real-time functionality or streaming. The problem with many of these applications is that they are composed of a handful of tightly coupled monoliths. This pattern is often referred to as a distribulith (distributed monolith).

We wanted to build a chat application to show how easy it is to do using wasmCloud and Cosmonic tooling. Along the way, we also wanted to show how easy it is to step things up while keeping the development model simple. As such, this chat application provides a unified chat infrastructure for clients connecting using a message broker and clients connecting using a traditional RESTful HTTP endpoint.

Architecture and Design

Multi-Channel Chat Application Architecture

The multi-channel chat application facilitates a persistent chat across multiple channels. In this version, there is no strict authentication: clients are able to self-identify in order to distinguish one originator from another. In subsequent versions we might add stronger security. The application is composed of the following actors:

Messaging Channel

This actor, bound to a message broker provider, is responsible for handling incoming messages from chat clients and delivering messages to subscribing clients. It will handle things like the sign-on message and a standard chat message payload from clients. It will also accept an actor-to-actor call from the chat log actor which is used to deliver messages to listening clients.

As an example, someone might issue a POST to the HTTP endpoint. This makes its way through the API gateway actor and into the chat log actor for storage. The API gateway actor then sends a message to the messaging channel actor. If there are any clients interested in that message in the attached message broker, that message will be delivered to them.

HTTP Channel

The HTTP channel is used to accept a sign-on message to identify clients. In addition, it accepts POST messages containing chat data. These messages are then sent to the API gateway for processing. Finally, this actor exposes a GET endpoint that allows clients to query the chat log history.

API Gateway

The API gateway exposes a unified interface into the chat "back-end". In the current example, there are two channels: message broker and HTTP. If we were to add more channels, they could be added easily and then consume the API gateway actor's exposed API. The gateway accepts incoming messages with an origin field, which will be different depending on which channel handled the message.

The API gateway also acts as a relay, publishing outbound messages to the appropriate channels.

Chat Log

The chat log actor provides basic read/write operations on the persistent chat log, which is stored in some key-value provider which can be bound at runtime.

Implementation Walkthrough

The following guide assumes you're starting from scratch, but will ask you to make changes to your code and configuration files. If at any time you want to see the full example as a reference, take a look at the things-to-build repository.

Prerequisites

  1. An active Cosmonic account and a constellation you can use.
  2. wash from the wasmCloud installation guide, at least version 0.13.0
  3. A Rust toolchain from the rust-lang website
  4. The wasm32-unknown-unknown target, by running rustup target add wasm32-unknown-unknown after installing Rust
  5. A local installation of Redis, or an equivalent running Docker image

Implementation

To start, we'll use wash to generate all of the actors we're going to use for this project. These templates will come with a bit more than we'll need, so some of the work will be deleting code. Create a new root directory for the project. We used multi-channel-chat.

wash new actor chatlog --template-name hello
wash new actor api-gateway --template-name hello
wash new actor nats-channel --template-name hello
wash new actor http-channel --template-name hello

Setup

Before we get going with the rest of the build, copy the files from the interface directory into your own interface directory below the project root. This contains the .smithy files that define the services and schemas for all of our actor-to-actor interaction.

Your directory structure should now look something like this:

+-+ multi-channel-chat
|
+---+ chatlog
+---+ api-gateway
+---+ nats-channel
+---+ http-channel
+---+ interface

Building the Chatlog Actor

Open up the directory where you created the chat log actor. First, delete the Makefile because we can use wash build to compile. Let's edit the wasmcloud.toml file to be as follows:

name = "chatlog"
language = "rust"
type = "actor"
version = "0.1.0"

[actor]
claims = ["cosmonic:samples:chatlog", "wasmcloud:keyvalue", "wasmcloud:builtin:logging"]
call_alias = "mcchat/chatlog"

The call alias is going to be important for when we do actor-to-actor calls.

Change the package name in Cargo.toml to chatlog and make sure you have the following dependencies:

[dependencies]
async-trait = "0.1"
futures = "0.3"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.86"
wasmbus-rpc = "0.10"
wasmcloud-interface-keyvalue = "0.8.0"
wasmcloud-interface-logging = "0.7.1"

[dev-dependencies]
base64 = "0.13"

# build-dependencies needed for build.rs
[build-dependencies]
weld-codegen = "0.5.0"

We're embracing the share nothing pattern here and so instead of creating a shared crate upon which all our actors depend, each one will generate its own code based on the shared schemas. This will ultimately pay off in reducing friction when each of these actors has to change on its own cadence.

This means we'll need a build.rs and a codegen.toml (both go in the chatlog project root, e.g. ./multi-channel-chat/chatlog).

Here's our build.rs:

const CONFIG: &str = "./codegen.toml";

fn main() -> Result<(), Box<dyn std::error::Error>> {
    weld_codegen::rust_build(CONFIG)?;
    Ok(())
}

And our codegen.toml:

[[models]]
path = "../interface"
files = [
"chatlog.smithy",
]

[[models]]
url = "https://wasmcloud.github.io/interfaces/idl/org.wasmcloud"
files = [ "wasmcloud-core.smithy", "wasmcloud-model.smithy" ]

[html]
output_dir = "html"
[html.parameters]
minified = true

[rust]
output_dir = "."

[rust.parameters]

[[rust.files]]
path = "src/chatlog.rs"
namespace = "com.cosmonic.samples.mcchat.chatlog"

We don't need to compile yet, but if we were to build the actor right now, we'd get the newly generated file src/chatlog.rs.

The purpose of the chatlog actor is to store and retrieve messages from the persistent chat log utilizing the key-value abstraction. To keep that nice and tidy, let's put the following code in src/store.rs:

use crate::chatlog::*;
use wasmbus_rpc::actor::prelude::*;
use wasmcloud_interface_keyvalue::*;
use wasmcloud_interface_logging::info;

const MESSAGE_LIST_KEY: &str = "chatlog:messages";

pub(crate) async fn write_message(ctx: &Context, msg: &CanonicalChatMessage) -> RpcResult<()> {
    let raw = serde_json::to_string(msg).map_err(|e| RpcError::Ser(format!("{}", e)))?;

    let kv = KeyValueSender::new();
    info!("Storing chat log message '{}'", raw);
    let _ = kv
        .list_add(
            ctx,
            &ListAddRequest {
                list_name: MESSAGE_LIST_KEY.to_string(),
                value: raw,
            },
        )
        .await?;

    Ok(())
}

pub(crate) async fn get_messages(ctx: &Context) -> RpcResult<Vec<CanonicalChatMessage>> {
    let kv = KeyValueSender::new();

    kv.list_range(
        ctx,
        &ListRangeRequest {
            list_name: MESSAGE_LIST_KEY.to_string(),
            start: 0,
            stop: 9999,
        },
    )
    .await
    .map(|res| {
        res.iter()
            .filter_map(|s| match serde_json::from_str(s.as_str()) {
                Ok(v) => Some(v),
                Err(_) => None,
            })
            .collect::<Vec<CanonicalChatMessage>>()
    })
}

What is remarkable about this code is what isn't in there. The intent of the developer is crystal clear, and the code isn't burdened by a mountain of ceremony and boilerplate involving connections, connection pools, credentials, retry attempts, logging, tracing, ... the list goes on.

In this code, we serialize an incoming message to a string and add it to a list. To query the list of messages, we just map the string value of each stored item to the CanonicalChatMessage type (which we defined in the ../interface/chatlog.smithy file). There are some divergences from what we might ship in a real-world application, like capping the query at 10,000 entries (indices 0 through 9999); ideally we'd want some kind of cursor or paging mechanism. Additionally, we're assuming that the list results come back in chronological order, which may not hold if we change capability providers.
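If we did want paging, one option is to translate a page number and page size into the inclusive start/stop indices that a list-range request expects. The sketch below is purely illustrative; `page_window` is a hypothetical helper of ours, not part of the sample's interface, and it assumes the provider's list ranges are inclusive on both ends:

```rust
// Hypothetical helper: convert (page, page_size) into the inclusive
// start/stop indices a list-range request expects.
fn page_window(page: u32, page_size: u32) -> (u32, u32) {
    let start = page * page_size;
    let stop = start + page_size - 1; // inclusive upper bound
    (start, stop)
}

fn main() {
    // Page 0 with 50 messages per page covers items 0..=49
    assert_eq!(page_window(0, 50), (0, 49));
    // Page 2 covers items 100..=149
    assert_eq!(page_window(2, 50), (100, 149));
}
```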

Now that we've defined our store.rs module, let's take a look at src/lib.rs:

use chatlog::*;
use wasmbus_rpc::actor::prelude::*;

#[allow(dead_code)]
mod chatlog;

mod store;

#[derive(Debug, Default, Actor, HealthResponder)]
#[services(Actor, Chatlog)]
struct ChatlogActor {}

/// Implementation of Chat Log actor trait methods
#[async_trait]
impl Chatlog for ChatlogActor {
    async fn write_message(
        &self,
        ctx: &Context,
        arg: &CanonicalChatMessage,
    ) -> RpcResult<WriteMessageResponse> {
        Ok(match store::write_message(ctx, arg).await {
            Ok(_) => WriteMessageResponse {
                accepted: true,
                reason: None,
            },
            Err(e) => WriteMessageResponse {
                accepted: false,
                reason: Some(format!("{}", e)),
            },
        })
    }

    async fn get_messages(&self, ctx: &Context) -> RpcResult<MessagesList> {
        Ok(match store::get_messages(ctx).await {
            Ok(v) => v,
            Err(_) => vec![],
        })
    }
}

Again worth noting is the sheer (lack of) size of this codebase. There's a subtle architectural decision here worth mentioning. We do not return an Err from any of these functions. The RpcResult type assumes that the only time you return an Err is when a catastrophic communication failure occurred. If something bad, but manageable, happens, then we return the error indication in the payload itself.

In this case, we have an accepted field. The wasmCloud infrastructure watches for these trait functions returning Err, and that could cause hosts to react in ways you didn't expect. Returning accepted: false here means "the internal plumbing is working fine, but we couldn't complete your request."

info

As a general rule, never return Err from RpcResult<T> functions unless something catastrophic occurred. Do not use Err for validation failures.
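A minimal, self-contained sketch of this rule, using plain Rust (the response struct mirrors the generated WriteMessageResponse; the validation check itself is hypothetical):

```rust
// Stand-in for wasmbus_rpc::RpcResult, so this sketch runs on its own.
type RpcResult<T> = Result<T, String>;

#[derive(Debug)]
struct WriteMessageResponse {
    accepted: bool,
    reason: Option<String>,
}

fn write_message(body: &str) -> RpcResult<WriteMessageResponse> {
    // A validation failure is data, not an Err: the RPC itself succeeded.
    if body.is_empty() {
        return Ok(WriteMessageResponse {
            accepted: false,
            reason: Some("message body cannot be empty".to_string()),
        });
    }
    // Reserve Err for catastrophic failures (e.g. the store is unreachable).
    Ok(WriteMessageResponse {
        accepted: true,
        reason: None,
    })
}

fn main() {
    let resp = write_message("").unwrap(); // still Ok: the plumbing worked
    assert!(!resp.accepted);
    assert!(write_message("hi").unwrap().accepted);
}
```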

The easiest way to run this actor during development is to run wash build and then use the wasmCloud dashboard at http://localhost:4000 to start the actor from a local file (with hot reloading). You'll want to do that for each of the actors until we deploy to Cosmonic. You'll need a running wasmCloud host (you can use wash up for this) to get to the dashboard.

Building the API Gateway Actor

To keep from boring you with repetitive copy and paste, we suggest that you follow these quick steps to speed up the creation of this actor:

  • Copy the Cargo.toml file from the sample repo
  • Copy the wasmcloud.toml file from the sample repo
  • Delete the Makefile
  • Copy the build.rs file from the sample repo

Remember that the role of the API gateway actor is:

  • Accept incoming messages from arbitrary channels
  • Relay those same messages to all other channels
  • Relay those messages to the chat log actor for persistence

Doing things the old way, you can imagine how ugly and complex this code might look. But this is all that you need to write (src/lib.rs):

use chatlog::*;
use outbound::{Outbound, OutboundMessage, OutboundSender};
use wasmbus_rpc::actor::prelude::*;
use wasmcloud_interface_logging::error;

#[allow(dead_code)]
mod chatlog;

#[allow(dead_code)]
mod outbound;

const CHATLOG_ACTOR: &str = "mcchat/chatlog";

#[derive(Debug, Default, Actor, HealthResponder)]
#[services(Actor, Chatlog)]
struct ApiGatewayActor {}

const KNOWN_CHANNEL_NAMES: &[&str] = &["http", "nats"];

/// Implementation of Chat Log actor trait methods
#[async_trait]
impl Chatlog for ApiGatewayActor {
    async fn write_message(
        &self,
        ctx: &Context,
        arg: &CanonicalChatMessage,
    ) -> RpcResult<WriteMessageResponse> {
        let chatlog = ChatlogSender::to_actor(CHATLOG_ACTOR);

        let res = chatlog.write_message(ctx, arg).await;

        // Select every channel that isn't the one that just called
        let mut targets = KNOWN_CHANNEL_NAMES.to_vec();
        targets.retain(|c| *c != arg.channel_name);

        for channel in targets {
            let outbound = OutboundSender::to_actor(&format!("mcchat/{}", channel));
            let _ = outbound
                .publish_message(
                    ctx,
                    &OutboundMessage {
                        body: arg.body.to_string(),
                        source_channel: arg.channel_name.to_string(),
                        source_user: arg.source_user.to_string(),
                    },
                )
                .await;
        }
        res
    }

    async fn get_messages(&self, ctx: &Context) -> RpcResult<MessagesList> {
        let chatlog = ChatlogSender::to_actor(CHATLOG_ACTOR);

        chatlog.get_messages(ctx).await
    }
}

In this code, we call chatlog.write_message(ctx, arg) to send the message to the chat log actor for persistence. This is an actor-to-actor call that uses a call alias of mcchat/chatlog. Any running instance of the chat log actor, anywhere in our constellation, can answer that call.

Next we select a channel actor to receive the real-time notification that a new message arrived:

let outbound = OutboundSender::to_actor(&format!("mcchat/{}", channel));

So to send to the NATS channel, we use mcchat/nats and mcchat/http for the HTTP channel. Note that we don't create a blocking ouroboros loop here because we only call the other channels, not the one from which the initial message came.

tip

A subtle but amazing thing here is that both the API Gateway and the Chatlog actors are receivers for the Chatlog service. This means that you can essentially have distributed structured typing (interface implementation) across actors. Multiple actors all handle the write_message function, but they do different things with it. In our case, the API Gateway is a proxy while the Chat log actor is the "real" thing.
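The idea can be shown with a toy example in plain Rust (no wasmCloud types): two receivers implement the same service trait, one as a proxy and one as the real store. The names below are ours, chosen only to mirror the actors in this guide:

```rust
// One service trait, two receivers with different roles.
trait Chatlog {
    fn write_message(&mut self, body: &str) -> bool;
}

// The "real" receiver: persists the message.
#[derive(Default)]
struct ChatlogActor {
    log: Vec<String>,
}

impl Chatlog for ChatlogActor {
    fn write_message(&mut self, body: &str) -> bool {
        self.log.push(body.to_string());
        true
    }
}

// The proxy receiver: forwards the call, then fans out notifications.
#[derive(Default)]
struct ApiGatewayActor {
    inner: ChatlogActor,
    relayed: usize, // stand-in for publishing to the other channels
}

impl Chatlog for ApiGatewayActor {
    fn write_message(&mut self, body: &str) -> bool {
        let ok = self.inner.write_message(body); // same operation, different role
        self.relayed += 1;
        ok
    }
}

fn main() {
    let mut gw = ApiGatewayActor::default();
    assert!(gw.write_message("hello"));
    assert_eq!(gw.inner.log.len(), 1);
    assert_eq!(gw.relayed, 1);
}
```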

Building the NATS Channel Actor

To keep from boring you with repetitive copy and paste, we suggest that you follow these quick steps to speed up the creation of this actor:

  • Copy the Cargo.toml file from the sample repo
  • Copy the wasmcloud.toml file from the sample repo
  • Delete the Makefile
  • Copy the build.rs file from the sample repo

When you build this actor, you'll notice that you have two additional generated modules: outbound and chatlog. The NATS channel actor receives the outbound service while it calls/invokes the chatlog service. It accepts inbound NATS messages and relays them to the API gateway, while also accepting publications of real-time notification of messages as they arrive on other channels.

Replace src/lib.rs with the following:

use chatlog::*;
use outbound::*;
use serde::{Deserialize, Serialize};
use wasmbus_rpc::actor::prelude::*;
use wasmcloud_interface_messaging::*;
use wasmcloud_interface_numbergen::*;

#[allow(dead_code)]
mod chatlog;

#[allow(dead_code)]
mod outbound;

const CHANNEL_NAME: &str = "nats";
const API_ACTOR: &str = "mcchat/api";
const RTCHAT_TOPIC: &str = "rtchat.messages";

#[derive(Debug, Default, Actor, HealthResponder)]
#[services(Actor, MessageSubscriber, Outbound)]
struct NatsChannelActor {}

#[async_trait]
impl Outbound for NatsChannelActor {
    async fn publish_message(&self, ctx: &Context, arg: &OutboundMessage) -> RpcResult<bool> {
        let pubber = MessagingSender::new();
        let res = pubber
            .publish(
                ctx,
                &PubMessage {
                    subject: RTCHAT_TOPIC.to_string(),
                    reply_to: None,
                    body: serde_json::to_vec(arg).unwrap(),
                },
            )
            .await;
        Ok(res.is_ok())
    }
}

/// Implementation of the MessageSubscriber trait methods
#[async_trait]
impl MessageSubscriber for NatsChannelActor {
    async fn handle_message(&self, ctx: &Context, msg: &SubMessage) -> RpcResult<()> {
        let reply_topic = msg
            .reply_to
            .as_ref()
            .unwrap_or(&"deadletter".to_string())
            .to_string();
        match serde_json::from_slice::<IncomingMessage>(&msg.body) {
            Ok(im) => {
                let logger = ChatlogSender::to_actor(API_ACTOR);
                let numgen = NumberGenSender::new();
                let guid = numgen.generate_guid(ctx).await.unwrap_or("n/a".to_string());
                match logger
                    .write_message(
                        ctx,
                        &CanonicalChatMessage {
                            body: im.body,
                            channel_name: CHANNEL_NAME.to_string(),
                            id: guid,
                            source_user: im.user_name,
                        },
                    )
                    .await
                {
                    Ok(resp) => {
                        let ack = Ack {
                            accepted: resp.accepted,
                            reason: resp.reason,
                        };
                        let _ = pub_ack(ctx, &reply_topic, ack).await;
                    }
                    Err(e) => {
                        let _ = pub_fail(ctx, &reply_topic, format!("{}", e)).await;
                    }
                }
            }
            Err(_) => {
                let _ = pub_fail(ctx, &reply_topic, "Bad Request".to_string()).await;
            }
        }
        Ok(())
    }
}

async fn pub_ack(ctx: &Context, topic: &str, ack: Ack) {
    let publisher = MessagingSender::new();
    let _ = publisher
        .publish(
            ctx,
            &PubMessage {
                subject: topic.to_string(),
                reply_to: None,
                body: serde_json::to_vec(&ack).unwrap(),
            },
        )
        .await;
}

async fn pub_fail(ctx: &Context, topic: &str, msg: String) {
    pub_ack(
        ctx,
        topic,
        Ack {
            accepted: false,
            reason: Some(msg),
        },
    )
    .await
}

#[derive(Debug, Serialize, Deserialize, Clone)]
struct IncomingMessage {
    user_name: String,
    body: String,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
struct Ack {
    accepted: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    reason: Option<String>,
}

Much of this code's apparent size is just the line breaks inserted by cargo fmt 🙂

The most important thing to see here is that this actor implements two service traits:

impl Outbound for NatsChannelActor { ... }
impl MessageSubscriber for NatsChannelActor { ... }

This makes it clear to any developer that this actor can be called on the Outbound interface as well as receive messages from any capability provider that implements wasmcloud:messaging.

Running wash build will compile and sign everything you need.

Building the HTTP Channel Actor

As with the other actors, let's start off by copying some files into the directory:

  • Copy the Cargo.toml file from the sample repo
  • Copy the wasmcloud.toml file from the sample repo
  • Delete the Makefile
  • Copy the build.rs file from the sample repo

This actor responds to HTTP requests that will either submit a new message or query for the list of all messages. As with other actors, you'll see the src/outbound.rs and src/chatlog.rs files automatically generated during build. The important part is src/lib.rs, as shown below:

use chatlog::*;
use outbound::*;
use serde::{Deserialize, Serialize};
use wasmbus_rpc::actor::prelude::*;
use wasmcloud_interface_httpserver::*;
use wasmcloud_interface_logging::debug;
use wasmcloud_interface_numbergen::*;

#[allow(dead_code)]
mod chatlog;

#[allow(dead_code)]
mod outbound;

const CHANNEL_NAME: &str = "http";
const API_ACTOR: &str = "mcchat/api";

#[derive(Debug, Default, Actor, HealthResponder)]
#[services(Actor, HttpServer, Outbound)]
struct HttpChannelActor {}

#[async_trait]
impl Outbound for HttpChannelActor {
    async fn publish_message(&self, _ctx: &Context, _arg: &OutboundMessage) -> RpcResult<bool> {
        // This is absorbed silently because the HTTP channel does not currently expose
        // any kind of realtime subscription. Perhaps in the future a websocket subscription
        // could be used?
        Ok(true)
    }
}

/// Implementation of HttpServer trait methods
#[async_trait]
impl HttpServer for HttpChannelActor {
    async fn handle_request(&self, ctx: &Context, req: &HttpRequest) -> RpcResult<HttpResponse> {
        let path = &req.path[1..req.path.len()];
        let segments: Vec<&str> = path.trim_end_matches('/').split('/').collect();
        match (req.method.as_ref(), segments.as_slice()) {
            ("POST", ["messages"]) => write_message(ctx, deser(&req.body)?).await,
            ("GET", ["messages"]) => get_messages(ctx).await,
            (m, p) => {
                debug!("unexpected method and path: {} - {:?}", m, p);
                Ok(HttpResponse::not_found())
            }
        }
    }
}

async fn write_message(ctx: &Context, im: IncomingMessage) -> RpcResult<HttpResponse> {
    // Go to the gateway, not directly to the chat logger
    let logger = ChatlogSender::to_actor(API_ACTOR);
    let numgen = NumberGenSender::new();
    let guid = numgen.generate_guid(ctx).await.unwrap_or("n/a".to_string());

    logger
        .write_message(
            ctx,
            &CanonicalChatMessage {
                body: im.body,
                channel_name: CHANNEL_NAME.to_string(),
                id: guid,
                source_user: im.user_name,
            },
        )
        .await
        .map(|r| r.into())
}

async fn get_messages(ctx: &Context) -> RpcResult<HttpResponse> {
    let logger = ChatlogSender::to_actor(API_ACTOR);
    match logger.get_messages(ctx).await {
        Ok(r) => HttpResponse::json(r, 200),
        Err(e) => Ok(HttpResponse::internal_server_error(format!("{}", e))),
    }
}

fn deser<'de, T: Deserialize<'de>>(raw: &'de [u8]) -> RpcResult<T> {
    serde_json::from_slice(raw).map_err(|e| RpcError::Deser(format!("{}", e)))
}

#[derive(Debug, Serialize, Deserialize, Clone)]
struct IncomingMessage {
    user_name: String,
    body: String,
}

#[derive(Debug, Serialize, Deserialize, Clone)]
struct Ack {
    accepted: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    reason: Option<String>,
}

impl From<chatlog::WriteMessageResponse> for HttpResponse {
    fn from(source: chatlog::WriteMessageResponse) -> Self {
        if source.accepted {
            HttpResponse::default()
        } else {
            HttpResponse::internal_server_error(source.reason.unwrap_or_else(|| "".to_string()))
        }
    }
}

The code is fairly simple. We do nothing for the Outbound service contract implementation because there's no real-time publication over the HTTP channel (at least for this sample; WebSockets are a possible future enhancement). When handling an HTTP request, we check the route path to see whether the consumer is querying the message list or submitting a new message. In either case, we make a call on the API gateway actor and return an appropriate result.
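The routing logic in handle_request can be replicated standalone, without the wasmCloud types, to show how the (method, segments) tuple match behaves. This is a sketch for illustration only; `route` is a hypothetical helper returning a label instead of an HttpResponse:

```rust
// Standalone replica of the routing in handle_request: strip the leading
// slash, drop any trailing slash, split into segments, then match on
// (method, segments) as a tuple.
fn route(method: &str, path: &str) -> &'static str {
    let path = &path[1..path.len()];
    let segments: Vec<&str> = path.trim_end_matches('/').split('/').collect();
    match (method, segments.as_slice()) {
        ("POST", ["messages"]) => "write_message",
        ("GET", ["messages"]) => "get_messages",
        _ => "not_found",
    }
}

fn main() {
    assert_eq!(route("GET", "/messages"), "get_messages");
    assert_eq!(route("GET", "/messages/"), "get_messages"); // trailing slash ok
    assert_eq!(route("POST", "/messages"), "write_message");
    assert_eq!(route("PUT", "/other"), "not_found");
}
```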

Run wash build to compile and sign your actor.

Running Locally

Running this example locally is pretty easy. Simply follow these steps:

  • Ensure that you have a wasmCloud host running. The easiest way to do this is to run wash up
  • Go to http://localhost:4000 and use the wasmCloud dashboard to start (from file) these actors:
    • HTTP Channel
    • NATS Channel
    • API Gateway
    • Chat log
  • Start the following providers from their OCI references:
    • HTTP server - wasmcloud.azurecr.io/httpserver:0.16.3
    • NATS provider - wasmcloud.azurecr.io/nats_messaging:0.14.5
    • Redis provider - wasmcloud.azurecr.io/kvredis:0.17.0
  • Add a link definition between the HTTP Channel and the HTTP Server. Set the configuration value address to 0.0.0.0:8080
  • Add a link definition between the NATS Channel and the NATS capability provider. Set SUBSCRIPTION to chat.in
  • Add a link definition between the Chat log actor and the redis provider. Set URL equal to redis://127.0.0.1:6379 (assuming your redis server is on that port)
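If you prefer the CLI to the dashboard, the same three link definitions can be created with wash. This is a hedged sketch: the *_ID variables are placeholders for the public keys of your actors and providers (visible in the dashboard), and the syntax follows the wash 0.13-era `wash ctl link put` command.

```shell
# Placeholder IDs: substitute the actor/provider public keys from your host.
wash ctl link put "$HTTP_CHANNEL_ID" "$HTTPSERVER_ID" wasmcloud:httpserver address=0.0.0.0:8080
wash ctl link put "$NATS_CHANNEL_ID" "$NATS_ID" wasmcloud:messaging SUBSCRIPTION=chat.in
wash ctl link put "$CHATLOG_ID" "$REDIS_ID" wasmcloud:keyvalue URL=redis://127.0.0.1:6379
```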

With it all set up, you should have a wasmCloud dashboard that looks like the following:

Multi-Channel Chat Application in the wasmCloud Dashboard

To use this application, you just need to exercise the channels. For example, with the nats CLI installed, you can subscribe to a NATS topic like rtchat.messages (the topic on which the NATS channel publishes):

nats sub rtchat.messages
09:27:43 Subscribing on rtchat.messages
[#1] Received on "rtchat.messages"
{"body":"this is from HTTP","sourceChannel":"http","sourceUser":"Bob"}

[#2] Received on "rtchat.messages"
{"body":"this is from HTTP 2","sourceChannel":"http","sourceUser":"Bob"}

[#3] Received on "rtchat.messages"
{"body":"this is from HTTP 3","sourceChannel":"http","sourceUser":"Bob"}

[#4] Received on "rtchat.messages"
{"body":"this is from HTTP 4","sourceChannel":"http","sourceUser":"Bob"}

As messages come in from the HTTP channel, you'll see them immediately appear here.

To query the message history in the app, you can use curl:

curl localhost:8080/messages | jq
[
  {
    "body": "This is a test",
    "channelName": "nats",
    "id": "08dfc10a-ef2f-4ea0-a4b5-a32acae44a57",
    "sourceUser": "Kevin"
  },
  {
    "body": "This is a test",
    "channelName": "nats",
    "id": "91eb9f88-fe8a-4f9f-b3a8-034ecd3caf96",
    "sourceUser": "Kevin"
  },
  {
    "body": "this is from HTTP",
    "channelName": "http",
    "id": "d3f9c3b4-9e88-43bb-9fba-a3250b452c5c",
    "sourceUser": "Bob"
  },
  {
    "body": "this is from HTTP",
    "channelName": "http",
    "id": "c71f4e45-9d6f-4b01-a145-a9937804a62d",
    "sourceUser": "Bob"
  }
]

And finally, you can use curl to post messages to the chat log (which you'll see both in redis and in the NATS channel):

curl -X POST http://localhost:8080/messages -d '{"user_name": "tester", "body": "hello!"}'

And if you're still subscribing to new messages in another terminal, you'll see this one appear:

16:14:17 Subscribing on rtchat.messages
[#1] Received on "rtchat.messages"
{"body":"hello!","sourceChannel":"http","sourceUser":"tester"}

We didn't build a user interface for this chat sample because we wanted to focus specifically on the core elements of building this distributed application. Subsequent tutorials may guide you through adding various UI elements to this app.

Running in Cosmonic

Before you start this section, make sure you don't have any running wasmCloud hosts.

Running this in Cosmonic is where the real fun starts, especially since we'll get to use a super constellation!

To start, go to your constellation's blank canvas (make sure you have a host running). On your host, start the following:

  • The wormhole-compliant HTTP server provider - cosmonic.azurecr.io/httpserver_wormhole:0.4.0. You'll be able to start this on your Cosmonic host, but you will not be able to launch this one locally. (It's only available in our infrastructure)

That's it! Now, bring up a wasmCloud host that is auto-joined into your constellation by typing cosmo up (This tool is being released as part of the Developer Preview. Check on our Discord if you don't have it). If needed, follow the directions to log in, then sit back and bask in the glow as you look at your Cosmonic canvas and see the locally running host magically appear in your constellation.

At this point, there's a host running in Cosmonic (with the wormhole-aware HTTP server provider) and one running on your workstation. The constellation (and all relevant tooling) simply doesn't care where anything's running... it's all one flat topology.

Follow the directions from the previous running locally section to start the following on your local host:

  • The Chatlog actor
  • The HTTP Channel actor
  • The NATS Channel actor
  • The API Gateway actor
  • The Redis capability provider
  • The NATS capability provider

Do not start a local HTTP server.

Go through the same steps to add link definitions as you did when running locally; however, instead of linking to the regular HTTP server provider, link the HTTP channel actor to the wormhole-aware HTTP server provider. You won't need to provide any configuration values for that one, since we manage it for you.

Now head over to your constellation canvas and click the "add wormhole" button. This will bring up a dialog that looks like this:

Creating a Wormhole Dialog

For more details on wormholes, check out the relevant section in our user guide.

Choose the actor (you'll want HTTP Channel) and choose the provider (HTTP Server (wormhole)). To keep things simple, don't choose to require authentication. Once created, you should see your wormhole appear on the canvas. Click it and you'll bring up the details panel on the right side. In the details panel, you'll see a link to access your wormhole. The host name and IDs will vary, but you'll see something similar to this:

Wormhole Details Panel

Make sure that you're subscribing to your (local) NATS provider's outbound message topic (rtchat.messages).

Copy the URL for that wormhole and use it to build a curl command like this:

curl -X POST https://icy-star-8401.cosmonic.app/messages -d '{"user_name": "Me", "body": "Hello everyone!"}'

Now you should see a message come in on your NATS topic! 🧠💥

To really appreciate what just happened, let's recap:

  • You deployed a capability provider in a Cosmonic managed host
  • You deployed the rest of your application, including Redis and NATS, locally.
  • You exposed an HTTP endpoint via a wormhole
  • When someone accesses that endpoint from the public internet, they go through the wormhole to your local workstation, all without you ever having to open a firewall port or relax security rules.

Alternate Architectures

The possibilities are truly endless. You can use the super constellation architecture to put some of your capabilities in some isolated set of infrastructure, e.g. somewhere you are running a database. You could use this to spread actors across multiple hosts in multiple regions for horizontal scaling and fault tolerance. You could even use this architecture and super constellations to create your own chat channel that comes from a swarm of IoT devices, or create an outbound-only channel that publishes chat messages to LED screens.

We've even used things like the RTL-SDR to receive ADS-B messages telling us where nearby planes are, and we can then convert those updates into chat messages!

The only real limit now is your imagination.

Source Code

All of the code for this application can be found in this GitHub repository.