Even though it’s a great service, I won’t use Amplify (which, btw, was simplified a little more recently - see this re:Invent talk). Instead, I’ll bring the pieces together myself - which conveniently also allows for a look under the hood.
I’ll follow the JAMstack pattern, i.e. UI and backend are separate and tokens are used for authentication. I’ll mainly be using two libraries: @badgateway/oauth2-client for the OAuth2 flow and @urql/core for GraphQL (plus, optionally, @graphql-codegen for types - more on that below).
First off, let’s call an ensureLogin method to make sure we have a user logged in already. I put it inside a Pinia store, but that’s totally optional:
const loginInfo = { currentToken: null, currentRefreshToken: null, currentUserName: null };

async function ensureLogin(): Promise<boolean> {
  await initPromise; // wait till auth is initialized & make sure we got current info - see below
  if (!loginInfo.currentToken) { // redirect to log us in
    redirectToLogin();
    return false;
  }
  return true;
}
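A component (or a router guard) can then make sure the user is logged in before loading anything. Here’s a minimal sketch of that usage, assuming the store above is used inside a component’s setup (loadData is just a placeholder):

import { onMounted } from 'vue';

onMounted(async () => {
  if (!(await ensureLogin())) {
    return; // not logged in - the redirect to the hosted login page is already underway
  }
  await loadData(); // placeholder for the actual API/GraphQL calls shown further down
});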
As you probably know, the way OAuth roughly works browser-side is by redirecting to a login page outside our app, having the user log in there, and redirecting back with a temporary code; our app then exchanges that code for the “actual” token (aka access token - short-lived) along with a refresh token that allows getting a new access token once the old one expires. There is also an ID token which has more info on the user - which we don’t need here.
So, as we don’t have a token initially, let’s take on the step of going to the login page. Btw, I did this with a lot of help from the great docs of @badgateway/oauth2-client.
import { OAuth2Client, OAuth2Fetch, generateCodeVerifier } from '@badgateway/oauth2-client';

const client = new OAuth2Client({
  server: 'https://<your user pool>.auth.eu-central-1.amazoncognito.com/', // slash counts! (your region instead of eu-central-1)
  clientId: '<your client ID>',
  authorizationEndpoint: 'login', // Cognito
  tokenEndpoint: 'token',
});
const redirectUri = 'http://localhost:8080/'; // (or the deployed app URL later - could of course use environment variables, just KISS here)
async function redirectToLogin() {
  const codeVerifier = await generateCodeVerifier();
  localStorage.setItem("pkce_code_verifier", codeVerifier);
  document.location = await client.authorizationCode.getAuthorizeUri({
    redirectUri,
    codeVerifier,
    scope: ['openid', 'email', 'aws.cognito.signin.user.admin'],
  });
}
One comment here: we request the aws.cognito.signin.user.admin scope to later be able to pull the username.

Now the user will be presented with a login page by Cognito to log in. Upon successful login, we come back to our redirect URL along with a code from Cognito, which - together with our stored verifier code - can be exchanged for tokens. Here’s the method to handle that:
async function handleRedirectBack() {
  if (!location.search?.includes('code')) {
    return;
  }
  const codeVerifier = localStorage.getItem("pkce_code_verifier");
  let oauth2Token, oauthUserInfo;
  try {
    oauth2Token = await client.authorizationCode.getTokenFromCodeRedirect(
      document.location as any,
      {
        redirectUri,
        codeVerifier,
      }
    );
    oauthUserInfo = await fetch(client.settings.server + 'oauth2/userInfo', { headers: { 'Authorization': 'Bearer ' + oauth2Token.accessToken } }).then(res => res.json());
  } catch (error) {
    alert("Error returned from authorization server: " + error);
    return;
  }
  localStorage.removeItem("pkce_code_verifier");
  window.history.replaceState({}, null, "/");
  loginInfo.currentToken = oauth2Token.accessToken;
  loginInfo.currentRefreshToken = oauth2Token.refreshToken;
  loginInfo.currentUserName = oauthUserInfo?.email;
  console.log(loginInfo.currentUserName);
}
initPromises.push(handleRedirectBack());
Now, we’re almost there. A great feature of @badgateway/oauth2-client is that it provides a modified version of fetch that injects the token and also takes care of refreshing it (if needed). Here’s how you get the modified fetch:
const fetchWrapper = new OAuth2Fetch({
  client,
  getNewToken: async () => {
    const newOauth2Token = await client.refreshToken({
      refreshToken: loginInfo.currentRefreshToken,
    } as any);
    loginInfo.currentToken = newOauth2Token.accessToken;
    loginInfo.currentRefreshToken = newOauth2Token.refreshToken;
    return newOauth2Token;
  },
  onError: (error) => alert("Error returned from authorization server: " + error),
});
The fetchWrapper.fetch method can now be called just like a regular fetch.
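For example (the URL is just a placeholder):

// the wrapper injects the Authorization header and refreshes the access token when needed
const res = await fetchWrapper.fetch('https://example.com/api/some-resource');
const data = await res.json();
console.log(data);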
Finally, when it comes to init:
const initPromise = Promise.all(initPromises);
This allows us to wait until all init steps have run before checking for login (again) and then possibly making calls to the API.
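Put together, the init wiring consists of these pieces in this order (a recap sketch - note the initPromises array needs to be declared before handleRedirectBack() is pushed onto it):

// declared once at module level
const initPromises: Promise<void>[] = [];

// collect all init steps - here: picking up the ?code=... redirect from Cognito
initPromises.push(handleRedirectBack());

// ensureLogin() awaits this before checking for a token
const initPromise = Promise.all(initPromises);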
With that, we have everything in place to make calls to APIs secured by our Cognito user pool. If we were using REST (e.g. with API Gateway), we’d be done already. As we want to get GraphQL in, let’s create an @urql/core client and get GraphQL rolling.
You can create a basic client like this (using the fetchWrapper from above):
import { Client as GraphqlClient, fetchExchange } from '@urql/core'
const graphqlClient = new GraphqlClient({
  exchanges: [fetchExchange],
  url: 'https://<our API>.appsync-api.eu-central-1.amazonaws.com/graphql', // (your region instead of eu-central-1)
  fetch: fetchWrapper.fetch.bind(fetchWrapper),
});
Which is pretty straightforward already - now we can do a call like this (assuming we have a query owned returning Items with an id and a title in each item):
import { gql } from '@urql/core'
const ownedQuery = gql`query {
  owned {
    Items {
      id
      title
    }
  }
}`
const ownedInfo = await graphqlClient.query(ownedQuery, null);
if (ownedInfo.error) {
console.error(ownedInfo.error);
return;
}
ownedInfo.data?.owned?.Items // do sth with it
The backtick syntax looks a little strange - in the end, it’s just a (tagged template) function call and nothing more. There is, btw, no pre-processing step needed, but let’s stick to the syntax that’s customary for GraphQL even though we don’t have to.
As said, types are optional. One can use @graphql-codegen/cli along with @graphql-codegen/typescript to generate interfaces and enums and have TypeScript support development.
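A minimal codegen config might look roughly like this - a sketch only; the schema location and output path are assumptions, with the output path chosen to match the import shown below:

// codegen.ts - sketch, assuming an exported schema file and an '@' alias pointing to src
import type { CodegenConfig } from '@graphql-codegen/cli';

const config: CodegenConfig = {
  schema: 'schema.graphql', // e.g. the schema exported from AppSync
  generates: {
    'src/types/my-graphql-types.ts': {
      plugins: ['typescript'], // @graphql-codegen/typescript
    },
  },
};

export default config;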
There is one big caveat, though: so far, Vite can’t handle the output of @graphql-codegen beyond type declarations - the build will simply fail. However, it works great with import type, i.e. instead of

import { TypeFromGqlSchema } from '@/types/my-graphql-types'

use

import type /* !!!!! */ { TypeFromGqlSchema } from '@/types/my-graphql-types'

and you’re good.
OAuth2 along with a managed service (there are others besides Cognito) is quite handy. I guess at some point we’ll need to tweak things back to old-school sessions - namely when quantum computing makes JWTs as such too insecure. In any case, security is tough and getting tougher by the day. A managed service as shown above offloads most of it to a team at AWS that has more bandwidth to deal with it.
]]>The first question I asked myself is: “Why are details not enough?” Everyone living (or rather suffering) through a ticket-based workstyle can probably relate: sth is missing, and that sth is context. There are a thousand little micro-decisions in every dev effort - and devs can get them right or wrong. I’d argue the odds for right improve quite dramatically with more context - i.e. a good briefing. A little more than 200 years ago, Clausewitz stressed aspects like objective, will and friction in On War (highly recommended read btw). And as “there was nothing new under the sun” (J. Mattis, Call Sign Chaos - another highly recommended read), every project today can benefit.
Yet at the same time, winging it rarely cuts it. And it takes up a lot of mental bandwidth.
So: here’s my stab at a briefing template - you can be the judge of whether I succeeded, of course. I tried to condense some good reading into my own take: a template to fill in by asking the important questions:
(that’s it)
The most important part is the first part (Getting a lay of the land) - and that should not take longer than 10 minutes. If you only have 10 minutes, that’s the one. Still, spending some more time (an hour tops) on laying out the solution proved powerful to me. All-in-all, even with little time involved, you might get a lot more clarity for everyone. Plus: it’s time you don’t have to spend explaining over and over again.
As ever: I hope this helps you a little and you found this useful. Sure lmk what you think!
]]>In short: render the component with Vue test-utils and use its querySelector-like API to extract content (this assumes Vue 3 with Vite).
There are several alternatives for testing visual components - among them unexpected-dom (or JSDOM), and rendering with Vue test-utils plus querySelector-like checks on the output. I found the last one the most effective - it’s the minimal, most straightforward solution. And the querySelector-like syntax works for most actual scenarios. Plus, we can still use findComponent to check props of children.
Vue test-utils allows rendering a component with given props to HTML. I prefer shallow rendering, i.e. sub-components just appear as <sub-component></sub-component> and are not rendered further. This keeps the scope of tests small enough. Here’s how that works - given the following rendered example:
<div>
  <table>
    <tbody>
      <tr>
        <td>
          Test-Sth
          <small>subtext</small>
        </td>
        <td>
          Sth else
        </td>
      </tr>
    </tbody>
  </table>
  <my-sub></my-sub>
</div>
Now, within any test method (it('what', () => { /* do some testing */ })), one can use:
// assumes: import { mount } from '@vue/test-utils' plus the component under test (MyComp)
const wrapper = mount(MyComp, { shallow: true, props: { myprop: 'myvalue' } });
expect(wrapper.find('table td:nth-child(1)').text()).toMatch(/Test-Sth.*subtext/);
expect(wrapper.find('table td:nth-child(1)').html()).toMatch(/Test-Sth.*<small.*subtext<\/small>/);
So it’s possible to drill down to what we really want to test with everything CSS has got (and that is a lot) - and then write specific tests.
Components might require (or import) things, esp. stores (which are great) - and we can mock those out in the usual way (vitest and jest are pretty similar here, as well):
vi.mock('../../stores/my-store', () => {
  const useMyStore: any = () => useMyStore; // return oneself - also allows access to the mock functions in later assertions
  useMyStore.loadSth = vi.fn();
  useMyStore.sth = [{ id: 42, title: 'Test-Sth', /* ... */ }];
  return { useMyStore };
});
Now, the component will get a mock instead of the real dependency for the scope of the test. As we’re using JS’s duck typing, we can even drill down into objects returned from constructor functions (like defineStore is).
Assuming the component needs to call loadSth on mounted, we can now test whether the invocation happened:

expect((useMyStore as any).loadSth).toHaveBeenCalled(); // (no params on this one, but you get the idea)
We can also use findComponent (see the API reference) et al. to check whether sub-components get the right properties. Here’s an example:
const mySub = wrapper.findComponent(MySub);
expect(mySub.props('foo')?.bar).toBe(42);
If we have computed props and similar, calling
await wrapper.vm.$nextTick();
might make a ton of sense as we give Vue a chance to compute and poss. re-render all we have.
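A minimal sketch of that pattern (MyComp and the selector are the ones from the example above; the concrete assertion of course depends on the component):

it('updates after a prop change', async () => {
  const wrapper = mount(MyComp, { shallow: true, props: { myprop: 'myvalue' } });
  await wrapper.setProps({ myprop: 'newvalue' });
  await wrapper.vm.$nextTick(); // let computed props and re-rendering settle
  expect(wrapper.find('table td:nth-child(1)').text()).toBeTruthy();
});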
…even though it’s only my second preference, there is a quite pragmatic approach to testing single components in the Vue ecosystem.
Hope you found this useful - as ever: let me know what you think!
]]>So the short answer to the title is a resounding YES! CodeWhisperer is definitely useful. Even in ways I would not have thought. My learning here is: it’s not just about repetition (copy and paste is quite efficient there), it’s also about variation.
Let me explain the last part a little: the more often things like the codebase, framework, or tooling change, the less every detail is in muscle memory already. And let’s face it: we won’t work on one codebase for 10 years (even 10 months is not a given). And even if it is the same codebase, we bring new stuff / new frameworks in. So something, inevitably, always changes.
Which means that something, every week, is a first for me. Reading up on all the details is certainly slower than writing a comment and pressing Option+C. Maybe the result isn’t perfect - but my very first attempt sure AF isn’t, either.
So, my argument here is quite simple: CodeWhisperer is super-useful, though not for what I initially thought.
]]>Taking heavy lifting out of projects is not really new (see parent POM, archetypes, rails scaffolding, yeoman, many more) - but nonetheless powerful. It gets even more important in the cloud where an individual project can be quite small (like one lambda).
The core idea is pretty simple: instead of creating package.json and other files oneself, a .projenrc.ts creates those files and applies whatever we define in TypeScript (or Python or …). As it’s all just plain TypeScript, it’s simple to add logic, spawn several subprojects (like one for each Lambda), and re-use code and defaults.
import { awscdk } from 'projen';

const project = new awscdk.AwsCdkTypeScriptApp({
  cdkVersion: '2.1.0',
  defaultReleaseBranch: 'main',
  name: 'projentest',
  projenrcTs: true,
  github: false,
  // deps: [],                /* Runtime dependencies of this module. */
  // description: undefined,  /* The description is just a string that helps people understand the purpose of the package. */
  // devDeps: [],             /* Build dependencies for this module. */
  // packageName: undefined,  /* The "name" in package.json. */
});
project.synth();
The above is all it really takes to create a CDK project (see further down for a Node-based Lambda). Based on this, npx projen creates a whole bunch of files (incl. .gitignore) with sensible defaults. (Caveat: npm i -g yarn needs to be run before the first npx projen.) Via options and via project.add<sth>, the generation process can also be customized.
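For illustration, such customization might look like this, continuing with the project from above (the dependency, ignore pattern, and task are made up; addDeps, addGitIgnore, and addTask are projen’s standard project methods):

project.addDeps('zod');                 // add a runtime dependency
project.addGitIgnore('*.local');        // extend the generated .gitignore
project.addTask('deploy:dev', {         // custom task, ends up in .projen/tasks.json
  exec: 'cdk deploy --profile dev',
});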
There is also no need to limit oneself to the defaults - here is how to create one’s own class with its own defaults:
export class MyLambdaProject extends typescript.TypeScriptAppProject {
  constructor(options: typescript.TypeScriptProjectOptions) {
    super(options);
    this.addDevDeps('@aws-sdk/client-dynamodb@^3.500.0', '@aws-sdk/client-s3');
    // (more, see also below)
  }
}
It can then be used as follows:
const lambda1 = new MyLambdaProject({
  entrypoint: 'src/index.ts',
  defaultReleaseBranch: 'main',
  parent: project,
  outdir: 'functions/lambda1',
  name: 'lambda1',
  // (more if we want)
});
lambda1.synth();
So it’s really just plain TypeScript - which could e.g. be used to loop over all needed Lambdas (see the sketch below). The above example also shows subprojects - i.e. in my case, I have CDK as the parent project and Lambdas as children, in the same (mono)repo. The point is: all Lambdas get the same defaults (in this case: dependencies added on top of whatever we configure).
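Here’s what such a loop over Lambdas could look like (the function names are made up):

// sketch: spawn one subproject per Lambda
for (const name of ['lambda1', 'lambda2', 'orders-import']) {
  const fn = new MyLambdaProject({
    entrypoint: 'src/index.ts',
    defaultReleaseBranch: 'main',
    parent: project,
    outdir: `functions/${name}`,
    name,
  });
  fn.synth();
}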
By the way: the <dependency>@<version> notation is optional, but it allows pinning versions if we really want to.
If a monorepo is not the way to go, there are other ways to distribute the MyLambdaProject class, e.g. a private NPM repo, a postinstall action cloning it into the local code, or even a git submodule (though, to me as a pragmatist, the latter seems overkill).
projen generates files upon npx projen. That’s standard files like package.json and more - but it can also be any custom file. There are two types:

- SampleFile is only generated when it does not yet exist - and can be overwritten by devs afterwards. Useful e.g. for an empty Lambda code file.
- TextFile and other managed files (as projen calls them) are overwritten every time. projen is (btw) smart enough to create those write-protected, so VSCode and others make it clear that editing is not a good idea.

Here’s a sample of both:
import { SampleFile, TextFile, typescript } from 'projen';

export class MyLambdaProject extends typescript.TypeScriptAppProject { // could be a git module!
  constructor(options: typescript.TypeScriptProjectOptions) {
    super(options);
    // (more stuff)
    // add one static asset
    new SampleFile(this, 'sample.txt', {
      contents: 'Schmu123', // stays
    });
    // and one that updates
    new TextFile(this, 'managed.txt', {
      lines: 'Schmu123'.split(/[\r\n]+/), // gets overwritten
    });
  }
}
Disclaimer upfront: I’m still learning the latter as well. From all I see, the ideas are very similar indeed - add something from a template and be able to update from the template later (CodeCatalyst calls this lifecycle management). Surely the people involved were talking ;-) So, much can be achieved with both. When using blueprints, though, one is tying the knot with CodeCatalyst - it’s an individual decision whether that makes sense.
projen is a quite useful new take on a well-established idea. It’s hands-on and simple - and allows having many similar projects at almost zero effort for the individual teams. There was not much point in this for monolithic codebases - but with the cloud increasingly becoming the platform and framework, it is getting really handy.
As ever: hope you found this useful - let me know what you think!
]]>I’d say: because you need a ton of mental bandwidth to get anything done. Imagine just adding a small comment in Con**nce or one of the office suites: find the right page or doc, find the right part of it, switch to edit mode (and find the right part again), add sth small, “Your token expired”, and so on and so on and so on. When all of this is done, you can essentially start over with whatever else you were really doing.
I proposed a solution for this a while ago called debriefit (can try live here) - and didn’t get much love, tbh. BUT: seems like I was not alone with the idea of quickly looking up the right part and quickly adding sth small.
What Obsidian offers is the Meta+O command with a quick search across every doc - so jumping somewhere is quick and does not use a lot of mental bandwidth. So that’s one issue solved.
Secondly, doing sth trivial like reporting time spent on a task is a black hole for mental bandwidth just the same. The spreadsheet I use (already a really good one) shows 60 (sixty) icons on top of a sheet. And you still need to select the project again, and the task type again, and the date, and so on. Over time, I discovered many features that really make things easier - but I know very few people using even 10% of what’s in there.
Likewise: if your project decides to use J**a or tools like that, you’ll wish the 60 icons back. Especially the most (in)famous tool seems to have a contest going for making things hard. It sucks up all mental bandwidth (at least in my opinion) for sth as trivial as “I’ve worked 5h on this today”.
As quickly mentioned above, I think two features stand out:
1: Quickly opening any file with Meta-O (similar to Meta-P in VSCode) makes jumping to a doc easy - allowing me to insert a small hint without big overhead. I find that, often, small comments like “slowdown might be due to this queue; steps to troubleshoot: 1, 2, 3” are all it takes.
2: Editing properties that are machine readable (Meta-P lets you insert properties - and also remembers property names already used). This makes logging hours et al very easy.
Aside from that, the fact that one can even check in content and trace changes (just like code; little optimizations apply) makes a ton of sense as well. Having used plain files on GitHub before, this is just one more step. When using it for business, the only disadvantage vis-à-vis plain GitHub: it costs a bit of money.
Having used Jekyll for almost a decade, properties are just normal - every Jekyll post has properties for layout, title, keywords, etc. So does Rmarkdown. Take this post:
---
layout: post
title: "Tracking without hassle - finally - with properties in Obsidian"
date: 2024-02-03 18:00:00 +0100
categories: markdown pm management reporting controlling kpi obsidian
---
(by the way: most old-school office suites also have “custom document props” - just not the most widely-used feature)
Obsidian also supports those markdown-style properties.
Now consider the following example: basic facts on a project are listed as markdown properties:
./Feature I/Main.md:
---
estimate: "40"
status: Ongoing
---
some more detail
./Feature I/Hours/SRo Feb 03.md:
---
timespent: "7"
date: 2024-02-03
---
Did the basics now
./Feature I/Hours/SRo Feb 04.md:
---
timespent: "5"
date: 2024-02-04
---
Added some more
All it takes now is a very simple script - like the following one - to pull reports:
#!/bin/bash
echo '---' > Stats.md
echo -n 'estimate: ' >> Stats.md
grep -r 'estimate:' . | grep '.md' | grep -v Stats.md | grep -v grep | awk -F: '{gsub("\"", "", $3); sum = sum + $3} END {print sum}' >> Stats.md
echo -n 'timespent: ' >> Stats.md
grep -r 'timespent:' . | grep '.md' | grep -v Stats.md | grep -v grep | awk -F: '{gsub("\"", "", $3); sum = sum + $3} END {print sum}' >> Stats.md
echo '---' >> Stats.md
which results in sth like:
Stats.md:
---
estimate: 40
timespent: 12
---
Of course, doing more (like in Node or Python) makes it even more powerful - see the sketch below. Yet: sth as simple as the above is enough. Which not only makes documentation a ton easier - but also allows reports to be pulled easily.
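For illustration, a small Node/TypeScript take on the same report could look like this - a sketch; it assumes the folder layout from above and, like the shell script, skips Stats.md:

// sum-props.ts - sums the 'timespent' property across all markdown files below the current folder
import { readdirSync, readFileSync, statSync } from 'fs';
import { join } from 'path';

function mdFiles(dir: string): string[] {
  return readdirSync(dir).flatMap((entry) => {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) return mdFiles(full);
    return full.endsWith('.md') && !full.endsWith('Stats.md') ? [full] : [];
  });
}

const total = mdFiles('.')
  .map((file) => /timespent:\s*"?([\d.]+)"?/.exec(readFileSync(file, 'utf8')))
  .reduce((sum, match) => sum + (match ? parseFloat(match[1]) : 0), 0);

console.log(`timespent: ${total}`);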
With Gen-AI being all the craze, just firing off “Claim 5h on this task in this project” (for me, today) will be the way to go. It will take a bit of prompt engineering to sniff out the correct values for project and task, but that looks very feasible.
Besides, as a super-simple option, just checking plain markdown into GitHub can also work. Properties are available in markdown throughout - and search is a feature of IDEs throughout, including VSCode with Meta-P (yes, I know - almost the same shortcut). So I could have just written “use VSCode to check markdown with properties into GitHub” - and that’s a legit option. All the scripts apply just the same.
VSCode has a snippets feature (see this Introduction) that allows for almost fully automating markdown properties - like timesheets. No tools needed outside VSCode, and it’s fast.
Sticking with the timesheet example, you can create a file .vscode/snippets.code-snippets in your project and give it content like this:
{
  "Timesheet": {
    "scope": "markdown",
    "prefix": "timesheet",
    "description": "Timesheet entry",
    "body": [
      "---",
      "timespent: \"0\"",
      "date: 2000-00-00",
      "---",
      ""
    ]
  }
}
Now, in any markdown file, you can type time (or timesheet), press CTRL+SPACE, and choose the snippet from IntelliSense.
There’s a lot of value in markdown properties (allowing for automated processing of info) + good search. Obsidian is one way; much can be emulated with plain VSCode and GitHub as well. In any case, it’s a massive improvement vis-à-vis “legacy” tools.
Feel free to try it out - and as ever: let me know what you think!
]]>The core of the idea is to override window.fetch only during testing. As fetch returns a promise, we have to replace the return value with our own promise so we know when the result is there. The same goes for methods like res.json() - which again we have to wrap. Finally, we jump to the end of the queue one more time (via setTimeout) - so that any promise handlers have likely fired before we assume all is finished.
You can find the complete implementation on github. You can start checking from this part:
// a lot more detail
window.__hasNoOngoingFetch = function() {
return __openFetchPromises === 0;
};
// a lot more detail
window.fetch = __instrumentPromise(window, window.fetch, 'fetch');
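For illustration, the counting part of such a wrapper could look roughly like this - a simplified sketch, not the actual implementation from the repo:

// sketch only - the real version also wraps res.json() and friends
let __openFetchPromises = 0;

function __instrumentPromise(target: any, fn: (...args: any[]) => Promise<any>, name: string) {
  // 'name' kept to match the call signature above
  return function (...args: any[]) {
    __openFetchPromises++;
    return fn.apply(target, args).finally(() => {
      // jump to the end of the queue once more, so handlers on this promise have likely run already
      setTimeout(() => { __openFetchPromises--; }, 0);
    });
  };
}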
It’s still alpha and only for testing anyway - but checking this in Playwright yields a pretty great result already:
// initializing interaction
await page.waitForFunction(() => (window as any).__hasNoOngoingFetch()); // so much better
// we can be sure we're done
You find the complete example on github as well.
Again: this is very much still alpha - but I wanted to share the idea regardless; let’s see how this shakes out…
Hope you find it useful & let me know what you think!
]]>First of all, let’s define the problem we’re trying to solve: keeping several parts of the UI in sync. Sometimes (like with form-based apps), that’s just not a thing. Other times (like in e-commerce), it is very much relevant - like updating the items in a shopping cart, where the “buy” button sits far away from the cart symbol, which again sits far away from a “related items” box. Some might go hardcore and really have several builds with several scripts - potentially even in several frameworks, bound together by SSI - one realization of the idea of “micro frontends”. But even if we stay grounded and just have one frontend (and one build): there are still several components to be kept in sync.
The obvious answer is using a state framework à la Redux, Vuex, or NgRx. And when you start off with that (and it fits the use case): great! You’re all set.
What I want to do here is show an easy way to do it in environments where revamping state management is not an option.
In short, it’s a single observable value that any part of the UI can subscribe to.
Of course, you can just implement this yourself, no issue - I also did this a couple of times. When using RxJS anyhow (a given when you’re in the Angular world), you even get it for free - via BehaviorSubject. I’ve put together this CodePen with a full example.
You can instantiate a BehaviorSubject from RxJS with an initial value:
var bs = new rxjs.BehaviorSubject(0);
Then, you can get the current value via bs.value and update it via bs.next(newValue). Any part of the code can subscribe an observer via bs.subscribe(nextValue => doSth(nextValue)). And that’s it, really - no matter how many places in the code need an up-to-date value: you’ve got it.
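Sticking with the shopping-cart example from the beginning, a minimal sketch could look like this (using the ES-module import - the CodePen uses the global rxjs object; the names are made up):

import { BehaviorSubject } from 'rxjs';

// one shared subject holding the current number of items in the cart
const cartCount$ = new BehaviorSubject<number>(0);

// cart icon component (wherever it lives): stays up to date automatically
cartCount$.subscribe((count) => console.log(`cart badge now shows ${count}`));

// "buy" button handler, far away from the cart icon
function onBuyClicked() {
  cartCount$.next(cartCount$.value + 1);
}

onBuyClicked(); // -> cart badge now shows 1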
Hope you find it useful & let me know what you think!
]]>Hitting the back button at the wrong time in phpMyAdmin sure does not produce what you normally want (still a great and helpful tool, though) - especially as your carefully crafted query is, most likely, just lost. None of the alternative clients (all full-blown downloaded apps) convinced me - so before sitting down and starting to write my own (a longer journey with little time at hand), I actually ‘found’ sth I had already installed anyway: RStudio.
There’s comprehensive database support - for MariaDB, that would be RMariaDB - as well as recall, history, post-processing, and even creating nice reports.
So, here’s how to use one of my favorite apps (RStudio) as SQL client…
First of all, it’s simply
install.packages("RMariaDB")
RMariaDB offers a crisp intro with all the important commands. Your setup might be a little different - on my Mac, the dev DB needs an explicit user (& password); apart from that, a
library(DBI)
con <- dbConnect(RMariaDB::MariaDB(), user='magento', password='somethingverysecret')
dbSendQuery(con, 'use magento230')
# ready for queries
is all it takes to get started.
res <- dbSendQuery(con, 'select * from eav_attribute')
df <- dbFetch(res, n = 5)
dbClearResult(res)
selects up to five rows from eav_attribute (a table in Magento, the e-commerce solution); the same goes for any other query. df is a vanilla R data frame.
When doing EDA (as in Exploratory Data Analysis), being sloppy about my one DB connection is OK for me (yet hoping backend devs are very different when writing production code). That sloppiness lets me create shortcuts like
df <- dbFetch(dbSendQuery(con, 'explain eav_attribute'))
along with View(df) (or clicking df in the Environment tab of RStudio) gets me details on the eav_attribute table of Magento.
What makes one-liners so efficient is recall: you can try variations - and also get a previous query back from the History tab of RStudio really quickly.
Along with limit (at least in MariaDB), there is paging - like:
df <- dbFetch(dbSendQuery(con, 'select * from eav_attribute limit 5,10'))
which skips five rows and returns the 10 after that.
There are warnings about the previous query being cancelled - but again: we’re in EDA.
What’s nice about RStudio is that each View(dataframe) opens a new data tab with the contents - and that tab auto-updates each time the data frame updates. So running several queries via recall and viewing the results instantly is super-simple.
The above
df <- dbFetch(dbSendQuery(con, 'explain eav_attribute'))
retrieves the metadata on eav_attribute (could be any other table), while
df <- dbFetch(dbSendQuery(con, 'show tables'))
returns a list of all tables.
… I didn’t know amazing R(Studio) can be used as SQL client (for routine queries, for EDA, for anything really) - but now I do - and I love it! As ever, hope you find it useful & let me know what you think!
]]>That said, there are some drawbacks:
Given one wants to use a CDN at least for common libraries (it’s perfectly possible to still package more exotic ones into one’s own bundle), it needs some setup in webpack. Fortunately, this is quite straightforward - and possible not only for JavaScript resources but also for CSS or fonts (at least with a little hack).
The main element for using a CDN is webpack’s externals configuration, which allows any named import (or require) to resolve against an object in the global namespace instead of node_modules. The assumption is that importing a library via the CDN will have placed this object there before our code runs. So, in order to get React from a CDN, it takes these steps:
First, place a reference to React from the CDN into the head or body of the template (the latter only when webpack also inserts its scripts at the end of the body). We can also add CORS and SRI attributes right away:
<script language="javascript" type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/react/16.4.2/umd/react.production.min.js" crossorigin="anonymous" integrity="sha256-2EQx5J1ux3sjgPLtDevlo449XNXfvEplcRYWIF6ui8w="></script>
To compute the checksum, there’s the quick-and-dirty way of putting in anything and checking DevTools for the right one - or a clean way via a unix-like shell (thanks to superuser for help on converting hex to base64):
curl 'https://cdnjs.cloudflare.com/ajax/libs/react/16.4.2/umd/react.production.min.js' | shasum -a 256 | xxd -r -p | base64
All of this assumes we have a template HTML file like indexcdn.html specified in the webpack config, e.g. sth like the following:
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  plugins: [
    new HtmlWebpackPlugin({
      template: "public/indexcdn.html",
      // (more)
    }),
    // (more)
  ],
  // (more)
}
Second, React needs to be defined as an external in the webpack config:
module.exports = {
  externals: {
    'react': 'React',
    // (more)
  },
  // (more)
}
This assumes the above <script> did create a global React (a.k.a. window.React), which we can use now: every require('react') or import of react will now return this window.React.
For imported CSS like Bootstrap or FontAwesome, one can do the same (at least as long as it’s not CSS modules - see below). The import of the style does not actually use whatever is returned by the import (rather, it triggers the style being added to the head), like
import 'font-awesome/css/font-awesome.css';
so, what does work (though a tiny little bit hacky) is the following webpack config
module.exports = {
  externals: {
    'font-awesome/css/font-awesome.css': 'window',
    // (more)
  },
  // (more)
}
(the returned window is just discarded - all it takes is any value that is always available globally, and window sure is)
In addition to that, it again requires font-awesome to be included from the CDN in the head of the template HTML (like indexcdn.html):
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css" crossorigin="anonymous" integrity="sha256-eZrrJcwDc/3uDhsdt61sL2oOBY362qM3lon1gyExkL0=" />
CORS and SRI work the same as before; as webpack never actually descends into the FontAwesome CSS, all the webfont files will also be loaded from CDN.
The same method will not work when using CSS modules (i.e. really importing style names from a CSS and have prefixes with it). But on the other hand: these styles will most certainly be one’s own (and are loaded from one’s own bundle) anyway.
These steps are all it takes, really, to get a CDN working with webpack. As ever, hope you find it useful & let me know what you think!