Creating a placeholder image
After some posts about Terraform and infrastructure as code, this post is a little side-journey into the land of SVG: a description of the image creation algorithm behind placeruler.knappi.org.
A couple of years ago, at work, we built the digital version of the SPIEL. The SPIEL is the world’s largest trade fair for board games and is usually held in large halls of the fairground in Essen.
One particular challenge was the stand plan, which was designed to look like a board game. A board game designer created a number of combinable tiles in different layers, and instead of manually creating images for each possible combination - and company label - we decided to create them on the fly. First, we used dynamic SVGs to render the different layers in the browser, with the correct blending modes. But that proved to be way too slow. So we created an AWS Lambda function that uses the sharp library to produce the images, and we cached them in CloudFront… I think you might start to see some resemblance here.
If you want to know more about the SPIEL project, have a look at https://www.cosee.biz/references/spiel-digital. I also gave a Lightning Talk together with Patrick Wolf about the mechanisms.
The Placeholder Image
Back to our placeholder images: The nice thing about Sharp is that it can convert SVGs to PNGs. And creating an SVG should be easy with a little bit of JavaScript, right?
Well, there are some caveats, but let’s start with a very simple version. For that, we update the Lambda handler to return an SVG instead of plain text.
export const handler = streamifyResponse(async (event, responseStream, context) => {
  responseStream = awslambda.HttpResponseStream.from(responseStream, {
    statusCode: 200,
    headers: {
      "Content-Type": "image/svg+xml",
      "Cache-Control": "max-age=6"
    }
  });
  await pipeline(createSvg(100, 100), responseStream);
});
function createSvg(width: number, height: number) {
  return Readable.from(`
    <svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">
      <ellipse cx="0" cy="0" rx="${width}" ry="${height}" fill="blue"/>
    </svg>
  `);
}
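Before deploying anything, we can sanity-check the output locally. This little script (repeating createSvg so that the snippet is self-contained) collects the stream into a string and inspects the generated markup:

```typescript
import { Readable } from "node:stream";

function createSvg(width: number, height: number) {
  return Readable.from(`
    <svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">
      <ellipse cx="0" cy="0" rx="${width}" ry="${height}" fill="blue"/>
    </svg>
  `);
}

// Collect a Readable into a string so we can look at the generated markup
async function streamToString(stream: Readable): Promise<string> {
  let result = "";
  for await (const chunk of stream) {
    result += chunk;
  }
  return result;
}

streamToString(createSvg(100, 100)).then((svg) => {
  console.log(svg.includes('width="100"')); // true
});
```

Note that the ellipse is centered at the origin, so only its bottom-right quarter is visible - which is fine for a first placeholder.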
Deploying this works quite well, but if we want to create more complex images, we will encounter errors, and the feedback loop of deploying and testing is quite slow. So we should set up a local development environment.
Local Development
The lambda-stream package uses a local implementation of streamifyResponse if the code does not run in a Lambda environment. But it does not provide a complete polyfill. The code

responseStream = awslambda.HttpResponseStream.from(responseStream, {
  /* ... */
});

does not work, because awslambda is not defined. My approach was to create another layer of abstraction:
interface HandleRequestInput {
  /* to be defined */
}

interface HandleRequestReturn {
  statusCode: number;
  headers: Record<string, string>;
  body: Readable;
}
export async function handleRequest(
  event: HandleRequestInput,
): Promise<HandleRequestReturn> {
  return {
    statusCode: 200,
    headers: { "Content-Type": "image/svg+xml" },
    body: createSvg(100, 100),
  };
}
Once we have this, we can use it in the Lambda handler
export const handler = streamifyResponse(
  async (event, responseStream, context) => {
    const { statusCode, headers, body } = await handleRequest(event);
    await pipeline(
      body,
      awslambda.HttpResponseStream.from(responseStream, {
        statusCode,
        headers,
      }),
    );
  },
);
and in the local development environment.
import { Server } from "node:http";
import { handleRequest } from "./handleRequest";
import { pipeline } from "node:stream/promises";

const server = new Server((req, res) => {
  handleRequest({ rawPath: req.url ?? "" })
    .then(async ({ statusCode, headers, body }) => {
      res.writeHead(statusCode, headers);
      await pipeline(body, res);
    })
    .catch((err) => {
      console.error(err);
      res.writeHead(500);
      res.end();
    });
});

server.listen(3000, () => {
  console.log("Server started");
});
Note that I have not yet defined the HandleRequestInput interface. As we go along, we can add properties to this object, but we should only define the properties that we really need. Otherwise, we have to mock the whole event object in the local development environment.

For example, in order to parse the image size from the URL path, we need the rawPath property from the Lambda event. We can just add it to the HandleRequestInput interface and implement the handleRequest function with proper error handling.
export async function handleRequest(
  input: HandleRequestInput,
): Promise<HandleRequestReturn> {
  const match = input.rawPath.match(/(\d+)x(\d+)/);
  if (!match) {
    return {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: Readable.from("Size specification missing"),
    };
  }
  const width = Number(match[1]);
  const height = Number(match[2]);
  if (width <= 0 || height <= 0) {
    return {
      statusCode: 400,
      headers: { "Content-Type": "text/plain" },
      body: Readable.from("Invalid size"),
    };
  }
  return {
    statusCode: 200,
    headers: { "Content-Type": "image/svg+xml" },
    body: createSvg(width, height),
  };
}
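To sanity-check the parsing rules in isolation, the regex logic can be extracted into a small standalone helper (parseSize is a name I am introducing here for illustration; the post keeps the logic inline in handleRequest):

```typescript
// Hypothetical helper mirroring the size-parsing logic from handleRequest
function parseSize(rawPath: string): { width: number; height: number } | null {
  const match = rawPath.match(/(\d+)x(\d+)/);
  if (!match) return null;
  const width = Number(match[1]);
  const height = Number(match[2]);
  // Zero or negative dimensions are rejected, just like in the handler
  if (width <= 0 || height <= 0) return null;
  return { width, height };
}

console.log(parseSize("/300x200")); // { width: 300, height: 200 }
console.log(parseSize("/favicon.ico")); // null (no size in the path)
console.log(parseSize("/0x200")); // null (zero is not a valid size)
```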
While we continue developing the service, we can run the dev environment with tsx --watch, getting the feedback loop that we are used to when developing with live-reload.
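For example, the dev script could be wired up in package.json like this (a sketch; the entry file name src/dev-server.ts is my assumption, adjust it to your project layout):

```json
{
  "scripts": {
    "dev": "tsx --watch src/dev-server.ts"
  }
}
```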
Right now, deploying the Lambda includes everything in our lambda directory, even dev dependencies like tsx. The deployment process should omit these files. I will skip this part here and leave it up to you to figure this out…
Converting the image to PNG
Now, we can convert the SVG to a PNG using Sharp. First, we need to install sharp in our project:

npm install sharp

Then, we just need to change the handleRequest function’s default path from:
return {
  statusCode: 200,
  headers: { "Content-Type": "image/svg+xml" },
  body: createSvg(width, height),
};
to
// at the top: import sharp from "sharp";
return {
statusCode: 200,
headers: { "Content-Type": "image/png" },
body: createSvg(width, height).pipe(sharp().png()),
};
I haven’t added any error handling to this pipe. In this case, that is fine: createSvg creates a Readable directly from a string, which cannot produce errors during streaming. In general, though, don’t forget to handle stream errors properly.
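For cases where the source can fail, here is a minimal sketch (with an artificial failing source) of why stream.pipeline is the safer tool: unlike a bare pipe(), it propagates errors from every stream in the chain.

```typescript
import { Readable, PassThrough } from "node:stream";
import { pipeline } from "node:stream/promises";

// An artificial source that fails halfway through streaming
async function* failingSource() {
  yield "<svg>";
  throw new Error("boom");
}

// pipeline propagates errors from every stream in the chain and destroys
// the other streams, so nothing hangs or leaks.
async function runWithErrorHandling(): Promise<string> {
  try {
    await pipeline(Readable.from(failingSource()), new PassThrough());
    return "no error";
  } catch (err) {
    return `caught: ${(err as Error).message}`;
  }
}

runWithErrorHandling().then(console.log); // prints "caught: boom"
```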
Wrong architecture
Now, let’s try to deploy this change to AWS. Run Terraform, open the Lambda URL and …

… something is wrong here. The download dialog pops open. Testing the Lambda in the AWS Console reveals an error:
Could not load the "sharp" module using the linux-arm64 runtime
The reason for this error is that our Lambda is configured to use an ARM architecture. We ran npm install on our development machine, but the Sharp library uses native code, and native x86 code cannot be executed on ARM (see the docs for more details).
To solve this, we have to include another step before deployment:
npm ci
# The build needs the x86 dependencies
npm run build
# Now install the ARM dependencies
npm ci --cpu=arm64 --os=linux --libc=glibc
# After that, run Terraform
Conclusion
In this post, we have learned how to create PNG images on the fly with a little TypeScript and the Sharp library.
We have also set up a local development environment that allows us to test our code locally without deploying it. I think there should be better ways to do this, for example a polyfill that supports the streaming AWS Lambda API directly in Node.js. Maybe there is one; the world is big.
Nevertheless, I think separating the functional code from the Lambda API is a good thing, and it possibly allows us to use that code elsewhere as well.
Stay tuned though: the next article will also be about creating images. Next time we will have fun with fonts, because that is not a trivial matter.