<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Will Vincent]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://willvincent.com/</link><image><url>https://willvincent.com/favicon.png</url><title>Will Vincent</title><link>https://willvincent.com/</link></image><generator>Ghost 5.44</generator><lastBuildDate>Thu, 09 Apr 2026 21:53:31 GMT</lastBuildDate><atom:link href="https://willvincent.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Interview Code Challenges Are Testing the Wrong Thing]]></title><description><![CDATA[Most coding challenges test what we remember, not how we think.]]></description><link>https://willvincent.com/2026/01/27/tech-interviews-are-testing-wrong-thing/</link><guid isPermaLink="false">697912e2f5bda1000178a722</guid><category><![CDATA[tech industry]]></category><category><![CDATA[professional growth]]></category><category><![CDATA[career]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Tue, 27 Jan 2026 20:56:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1707585381675-4598140bbbfe?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIwfHxhbnRpcXVhdGVkfGVufDB8fHx8MTc2OTU0Njg1NXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1707585381675-4598140bbbfe?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIwfHxhbnRpcXVhdGVkfGVufDB8fHx8MTc2OTU0Njg1NXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Interview Code Challenges Are Testing the Wrong Thing"><p>When I was in college, I was required to take a Java course. 
I wasn&apos;t particularly interested in Java, but nothing was particularly <em>difficult</em> about the class except the tests.</p><p>Not because I didn&apos;t understand the questions or logic flow, but because they were administered the way one might administer a test for a foreign language. I mean an actual <em>spoken</em> language, not some random <em>programming</em> language.</p><p>Specifically, we were given closed-book tests in which we were expected to hand-write code.</p><p>On paper.<br>With a pencil.<br>Scored on syntax. </p><p>To my mind, this wasn&apos;t testing the ability to think like a developer, but rather the ability to memorize the syntax of the Java language. At the time AI didn&apos;t exist and even auto-complete in the IDE wasn&apos;t a thing, and syntax <em>does</em> matter, but the approach felt disconnected from how software is actually built.</p><p>When it comes to writing code, specific domain knowledge of the <em>syntax</em> ranks fairly low on the scale of what&apos;s actually important. Developers constantly lean on other tools: Google, docs, books, and today AI.</p><p>If the job is software development, determining that the applicant <em>can</em> write code is worthwhile, but expecting them to sit down and write some arbitrary code without allowing reference to external sources, especially under the extra, artificial pressure of being watched within interview time constraints, is... unreasonable.</p><p>Speed is important.<br>Correctness is important.<br>Knowing and recognizing design patterns is important.</p><p>These all lead to stable, performant, maintainable code. </p><figure class="kg-card kg-image-card"><img src="https://willvincent.com/content/images/2026/01/yell-at-cloud.webp" class="kg-image" alt="Interview Code Challenges Are Testing the Wrong Thing" loading="lazy" width="360" height="203"></figure><p>Call me an old man yelling at clouds if you like, but the job was never <em>about</em> <strong>writing code</strong>. 
It has always been about <strong>solving problems</strong>. Real developers use references constantly. Knowing when to reach for reference material, and where to look, is much more important than memorization. Nobody ships in total isolation.</p><p>You could say that writing the code has been a bottleneck in the process. Modern tools - autocomplete, LSPs, LLMs, etc. - can reduce the bottleneck of typing and shift focus to where it always belonged in the first place: <em>Thinking</em>.</p><p>AI doesn&apos;t replace engineers. It replaces the need to memorize. Even if you are not going to allow employees to use AI on the job - which frankly seems almost as nonsensical as trying to one-shot &quot;vibe code&quot; an application with AI - expecting someone to write functional code by hand, with no reference material, is even more antiquated today than it was over two decades ago.</p><p>Instead, interviews should be a conversation, not a test. They should focus on how the candidate thinks about a problem. 
What can they determine from the specifications provided? What would they do about the portions that are unclear or that they&apos;re unsure of?</p><p>Do they ask questions?<br>Do they think about edge cases?<br>Can they explain their approach to understanding and solving the problem?<br>If they can think of more than one way to approach it, can they articulate tradeoffs?<br>Can they adapt if requirements change?<br>Can they spot risks that might lead to stability or scalability issues?</p><p>I&apos;m less interested in their ability to recall syntax, or regurgitate design patterns off the top of their head, than I am in how they reason about a problem that needs to be solved.</p><p>Far too often interviews are disconnected from the reality of the job, with code challenges rewarding those who have recently interviewed, those who grind on &quot;<em>leetcode</em>&quot; or otherwise optimize for artificial constraints, completely overlooking judgement, thinking in <em>systems</em>, and most importantly, real-world experience.</p><p>I&apos;ve met and worked with incredible developers who <em>suck</em> at live coding, and mediocre ones who ace it.</p><p>Interviews should more closely resemble the job - not arbitrary code challenges or algorithm-puzzle style &quot;trick question&quot; nonsense. Most people never need to manually invert binary trees or implement their own sorting algorithms. These are <em>solved problems</em> that exist in most languages&apos; standard libraries or community code.</p><p>How candidates think about problems, communicate, and collaborate are vastly more important than whether they can remember if the haystack or needle comes first when searching an array or list, or immediately recall which design pattern applies to a <em>specific</em> problem. 
On the job, the solution is rarely a known target or a puzzle waiting to be unlocked - yet the latter is what is usually tested in interviews.</p><p>When I interview candidates for development roles I try to get a solid sense of how they think, and most importantly whether they&apos;re someone I could see myself working with on a daily basis. For the most part, everything else can be learned.</p><p>In the age of AI-assisted development, this is more relevant than ever, and interviews probably ought to adapt accordingly.</p>]]></content:encoded></item><item><title><![CDATA[What Job Searches and SaaS Marketing Have in Common]]></title><description><![CDATA[People struggle with job searches for the same reason SaaS founders struggle with marketing: great at execution, terrible at explaining value.]]></description><link>https://willvincent.com/2026/01/19/what-job-searches-and-saas-marketing-have-in-common/</link><guid isPermaLink="false">696e95dbf5bda1000178a6b7</guid><category><![CDATA[personal branding]]></category><category><![CDATA[marketing]]></category><category><![CDATA[saas]]></category><category><![CDATA[tech industry]]></category><category><![CDATA[professional growth]]></category><category><![CDATA[career]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Mon, 19 Jan 2026 21:10:01 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1616010650868-e8c1583f6b0d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE4fHxqb2IlMjBzZWFyY2h8ZW58MHx8fHwxNzY4ODU1MTQ5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1616010650868-e8c1583f6b0d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE4fHxqb2IlMjBzZWFyY2h8ZW58MHx8fHwxNzY4ODU1MTQ5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="What Job Searches and SaaS Marketing Have in Common"><p>I see a lot of 
posts on X lately saying some version of: <em>&#x201C;The hardest part of building a SaaS isn&#x2019;t the product, it&#x2019;s the marketing.&#x201D;</em></p><p>I agree with that statement, to a point. I also think it reveals something deeper, and a little uncomfortable.</p><p>The same problem shows up in job searches.</p><p>People talk about the job market as if it&#x2019;s uniquely broken, uniquely hostile, or uniquely unfair right now. And yes, the market <em>is</em> challenging. There are real macro forces at play. But I don&#x2019;t think finding a job is inherently harder than <em>doing</em> a job, any more than marketing a SaaS is inherently harder than <em>building</em> one.</p><p>They&#x2019;re just different skill sets.</p><p>And many (dare I say <em>most</em>) people are very good at one, and absolutely terrible at the other.</p><h3 id="execution-vs-marketing-is-a-real-divide">Execution vs. marketing is a real divide</h3><p>Developers are a great example.</p><p>Many engineers can implement complex systems with ease. They can design architectures, write clean code, ship features, debug production issues, and keep things running under pressure. Execution is their comfort zone.</p><p>But ask them to explain <em>why</em> what they built matters, to a customer, a hiring manager, or a non-technical stakeholder, and things often fall apart.</p><p>The value is there. The impact is there. The skill is there.</p><p>The <em>story</em> is missing.</p><p>The same thing happens in job searches.</p><p>You can be excellent at your job, consistently delivering, solving hard problems, making teams better, and still struggle mightily to land your next role. 
Not because you&#x2019;re bad at what you do, but because you don&#x2019;t know how to market yourself in a way that creates interest, confidence, and momentum.</p><h3 id="marketing-isn%E2%80%99t-lying-even-though-people-treat-it-like-it-is">Marketing isn&#x2019;t lying even though people treat it like it is</h3><p>A lot of technically strong people have an almost moral resistance to marketing.</p><ul><li>Marketing feels &#x201C;salesy.&#x201D;</li><li>Marketing feels inauthentic.</li><li>Marketing feels like exaggeration or spin.</li></ul><p>So instead of learning how to do it well, they <em>avoid it entirely.</em></p><p>But good marketing <em>isn&#x2019;t</em> lying. It&#x2019;s translation.</p><p>It&#x2019;s taking something real and valuable and expressing it in a way that resonates with the audience you&#x2019;re trying to reach.</p><p>In SaaS, that audience is customers.<br>In a job search, that audience is hiring managers, recruiters, and decision-makers.</p><p>In both cases, the failure mode is the same:<br><em>&#x201C;I built something great. Why doesn&#x2019;t anyone care?&#x201D;</em></p><h3 id="being-great-at-the-work-is-not-enough">Being great at the work is not enough</h3><p>This is the hard truth a lot of people don&#x2019;t want to hear:</p><p>Being good at execution is table stakes.</p><p>It always has been.</p><p>Companies don&#x2019;t hire <em>potential value</em>. They hire perceived value.<br>Customers don&#x2019;t buy <em>technical merit</em>. They buy outcomes, clarity, and confidence.</p><p>If you can&#x2019;t clearly articulate:</p><p>what you do,</p><p>why it matters,</p><p>and why <em>you</em> are the right person to do it,</p><p>then you&#x2019;re relying on hope instead of strategy.</p><p>And <em>hope</em> is not a <em>plan</em>. 
Whether you&#x2019;re launching a product or looking for a job.</p><h3 id="the-same-pattern-over-and-over">The same pattern, over and over</h3><p>When I zoom out, I see the same dichotomy everywhere:</p><ul><li>Builders who can ship but can&#x2019;t sell</li><li>Operators who deliver but can&#x2019;t pitch</li><li>Experts who assume the work should speak for itself</li></ul><p>In reality, the work rarely speaks unless you teach it how.</p><p>That doesn&#x2019;t mean everyone needs to become a marketer. But it <em>does</em> mean that if you want leverage (customers, jobs, opportunities) you need to develop at least a functional level of marketing skill.</p><p>Not hype.<br>Not bullshit.<br>Just clear, compelling communication.</p><h3 id="the-job-search-is-a-marketing-problem">The job search is a marketing problem</h3><p>For many people, the job search isn&#x2019;t failing because they&#x2019;re unqualified.</p><p>It&#x2019;s failing because they&#x2019;re invisible, unclear, or indistinguishable.</p><p>Resumes list responsibilities instead of impact.<br>Interviews focus on tasks instead of outcomes.<br>Online profiles read like internal documentation instead of value propositions.</p><p>That&#x2019;s not a talent problem. 
It&#x2019;s a marketing gap.</p><p>And just like with SaaS, the people who close that gap, even imperfectly, tend to win disproportionately.</p><p>If you&#x2019;re struggling to sell a product, or struggling to land a role, it might be worth asking a hard question:</p><p><em>&#x201C;Am I actually bad at this&#x2026; or am I just bad at <strong>explaining why it matters?</strong>&#x201D;</em></p><p>In both SaaS and careers, execution gets you <em>in the game</em>.<br>Marketing is what moves the ball.</p><p>Ignoring that doesn&#x2019;t make you principled.<br>It just makes you harder to find.</p>]]></content:encoded></item><item><title><![CDATA[Converting oklch to hex colors]]></title><description><![CDATA[<p>oklch is a great addition to css, providing much better control over color modifications, support for wider gamut, etc - reasons I&apos;m not going to go into in this post.</p><p>But... sometimes you still might need, or want, good old hex colors. Email templates immediately come to mind.</p>]]></description><link>https://willvincent.com/2025/12/01/converting-oklch-to-hex-colors/</link><guid isPermaLink="false">692e21a7f5bda1000178a680</guid><category><![CDATA[PHP]]></category><category><![CDATA[Javascript]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Mon, 01 Dec 2025 23:26:24 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1511980725567-b9b7f0358905?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDR8fHBhaW50JTIwcGFsbGV0dGV8ZW58MHx8fHwxNzY0NjMwOTM4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1511980725567-b9b7f0358905?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDR8fHBhaW50JTIwcGFsbGV0dGV8ZW58MHx8fHwxNzY0NjMwOTM4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Converting oklch to hex colors"><p>oklch is a great 
addition to CSS, providing much better control over color modifications, support for a wider gamut, and more - reasons I&apos;m not going to go into in this post.</p><p>But... sometimes you still might need, or want, good old hex colors. Email templates immediately come to mind.</p><p>So, if you happen to be collecting client branding colors in your application, for instance, and you&apos;re storing them as oklch instead of hex values, you&apos;ll need a way to convert to hex to inject the right value into your email template. Or whatever other use cases you can come up with.</p><p>If you&apos;re just doing one-off conversions, go ahead and visit <a href="https://oklch.com">oklch.com</a>. But if you need it done dynamically, you might want some PHP code to do the conversion automatically for you.</p><p>I got you, fam:</p><!--kg-card-begin: markdown--><pre><code class="language-php">&lt;?php

/**
 * Convert OKLCH string to Hex
 * @param string $oklchString - OKLCH color string (e.g., &quot;oklch(70% 0.15 30)&quot; or &quot;oklch(0.7 0.15 30deg)&quot;)
 * @return string Hex color string (e.g., &quot;#ff5733&quot;)
 * @throws InvalidArgumentException
 */
function oklchToHex(string $oklchString): string
{
    // Parse OKLCH string
    if (!preg_match(&apos;/oklch\s*\(\s*([0-9.]+)%?\s+([0-9.]+)\s+([0-9.]+)(?:deg)?\s*\)/i&apos;, $oklchString, $match)) {
        throw new InvalidArgumentException(&apos;Invalid OKLCH string format&apos;);
    }
    
    $l = (float) $match[1];
    $c = (float) $match[2];
    $h = (float) $match[3];
    
    // Convert percentage lightness to 0-1 range
    if (str_contains($oklchString, &apos;%&apos;)) {
        $l *= 0.01;
    }
    
    // Convert OKLCH to OKLAB
    $hRad = deg2rad($h);
    $a = $c * cos($hRad);
    $b = $c * sin($hRad);
    
    // Convert OKLAB to linear LMS (corrected matrix)
    $l_ = $l + 0.3963377774 * $a + 0.2158037573 * $b;
    $m_ = $l - 0.1055613458 * $a - 0.0638541728 * $b;
    $s_ = $l - 0.0894841775 * $a - 1.2914855480 * $b;
    
    $l3 = $l_ * $l_ * $l_;
    $m3 = $m_ * $m_ * $m_;
    $s3 = $s_ * $s_ * $s_;
    
    // Convert linear LMS to linear RGB (corrected matrix values)
    $r = 4.0767416621 * $l3 - 3.3077115913 * $m3 + 0.2309699292 * $s3;
    $g = -1.2684380046 * $l3 + 2.6097574011 * $m3 - 0.3413193965 * $s3;
    $b_rgb = -0.0041960863 * $l3 - 0.7034186147 * $m3 + 1.7076147010 * $s3;
    
    // Convert linear RGB to sRGB
    $r = linearToSrgb($r);
    $g = linearToSrgb($g);
    $b_rgb = linearToSrgb($b_rgb);
    
    // Clamp and convert to 8-bit
    $r = max(0, min(255, (int) round($r * 255)));
    $g = max(0, min(255, (int) round($g * 255)));
    $b_rgb = max(0, min(255, (int) round($b_rgb * 255)));
    
    // Convert to hex
    return sprintf(&apos;#%02x%02x%02x&apos;, $r, $g, $b_rgb);
}

/**
 * Convert linear RGB value to sRGB
 * @param float $val
 * @return float
 */
function linearToSrgb(float $val): float
{
    // Clamp to valid range first
    $val = max(0, min(1, $val));
    
    if ($val &lt;= 0.0031308) {
        return 12.92 * $val;
    }
    return 1.055 * pow($val, 1 / 2.4) - 0.055;
}

// Example usage:
echo oklchToHex(&apos;oklch(70% 0.15 30)&apos;) . PHP_EOL;      // Warm orange
echo oklchToHex(&apos;oklch(0.5 0.2 200deg)&apos;) . PHP_EOL;   // Blue
echo oklchToHex(&apos;oklch(90% 0.05 120)&apos;) . PHP_EOL;     // Light green
echo oklchToHex(&apos;oklch(0.3 0.1 300deg)&apos;) . PHP_EOL;   // Dark purple
</code></pre>
<!--kg-card-end: markdown--><p>Prefer JS? No worries:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">/**
 * Convert OKLCH string to Hex
 * @param {string} oklchString - OKLCH color string (e.g., &quot;oklch(70% 0.15 30)&quot; or &quot;oklch(0.7 0.15 30deg)&quot;)
 * @returns {string} Hex color string (e.g., &quot;#ff5733&quot;)
 */
function oklchToHex(oklchString) {
  // Parse OKLCH string
  const match = oklchString.match(/oklch\s*\(\s*([0-9.]+)%?\s+([0-9.]+)\s+([0-9.]+)(?:deg)?\s*\)/i);
  
  if (!match) {
    throw new Error(&apos;Invalid OKLCH string format&apos;);
  }
  
  let l = parseFloat(match[1]);
  let c = parseFloat(match[2]);
  let h = parseFloat(match[3]);
  
  // Convert percentage lightness to 0-1 range
  if (oklchString.includes(&apos;%&apos;)) {
    l = l / 100;
  }
  
  // Convert OKLCH to OKLAB
  const hRad = (h * Math.PI) / 180;
  const a = c * Math.cos(hRad);
  const b = c * Math.sin(hRad);
  
  // Convert OKLAB to linear LMS
  const l_ = l + 0.3963377774 * a + 0.2158037573 * b;
  const m_ = l - 0.1055613458 * a - 0.0638541728 * b;
  const s_ = l - 0.0894841775 * a - 1.2914855480 * b;
  
  const l3 = l_ * l_ * l_;
  const m3 = m_ * m_ * m_;
  const s3 = s_ * s_ * s_;
  
  // Convert linear LMS to linear RGB
  let r = 4.0767416621 * l3 - 3.3077115913 * m3 + 0.2309699292 * s3;
  let g = -1.2684380046 * l3 + 2.6097574011 * m3 - 0.3413193965 * s3;
  let b_rgb = -0.0041960863 * l3 - 0.7034186147 * m3 + 1.7076147010 * s3;
  
  // Convert linear RGB to sRGB
  r = linearToSrgb(r);
  g = linearToSrgb(g);
  b_rgb = linearToSrgb(b_rgb);
  
  // Clamp and convert to 8-bit
  r = Math.max(0, Math.min(255, Math.round(r * 255)));
  g = Math.max(0, Math.min(255, Math.round(g * 255)));
  b_rgb = Math.max(0, Math.min(255, Math.round(b_rgb * 255)));
  
  // Convert to hex
  return &apos;#&apos; + [r, g, b_rgb].map(x =&gt; x.toString(16).padStart(2, &apos;0&apos;)).join(&apos;&apos;);
}

function linearToSrgb(val) {
  // Clamp to valid range first
  val = Math.max(0, Math.min(1, val));
  
  if (val &lt;= 0.0031308) {
    return 12.92 * val;
  }
  return 1.055 * Math.pow(val, 1 / 2.4) - 0.055;
}

// Example usage:
console.log(oklchToHex(&apos;oklch(70% 0.15 30)&apos;));      // Warm orange
console.log(oklchToHex(&apos;oklch(0.5 0.2 200deg)&apos;));   // Blue
console.log(oklchToHex(&apos;oklch(90% 0.05 120)&apos;));     // Light green
console.log(oklchToHex(&apos;oklch(0.3 0.1 300deg)&apos;));   // Dark purple
</code></pre>
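<p>If you need this for a whole template or stylesheet rather than a single value, a small wrapper can swap every <code>oklch(...)</code> occurrence in a CSS string for its hex equivalent. Here&apos;s a quick sketch - note that <code>stubConverter</code> below is only a placeholder so the snippet stands alone; in practice you&apos;d pass the <code>oklchToHex</code> function from above:</p><pre><code class="language-javascript">// Replace each oklch(...) occurrence via a converter callback.
// The callback receives the full matched string, e.g. oklch(70% 0.15 30).
function inlineOklch(css, toHex) {
  return css.replace(/oklch\([^)]*\)/gi, toHex);
}

// Placeholder converter, for demonstration only -- swap in oklchToHex.
function stubConverter(match) {
  return `#ff5733`;
}

console.log(inlineOklch(`a { color: oklch(70% 0.15 30); }`, stubConverter));
// prints: a { color: #ff5733; }
</code></pre>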
<!--kg-card-end: markdown--><p>Happy color converting!</p>]]></content:encoded></item><item><title><![CDATA[Real-time with Mercure and FrankenPHP]]></title><description><![CDATA[FrankenPHP ships with Mercure support out of the box. But configuration details are wanting... Let's clear things up a bit!]]></description><link>https://willvincent.com/2025/11/29/real-time-with-mercure-and-frankenphp/</link><guid isPermaLink="false">692ab65ff5bda1000178a5c8</guid><category><![CDATA[Laravel]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Sat, 29 Nov 2025 09:31:35 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1685381949388-bb0402fbe133?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fG5vdGlmaWNhdGlvbnxlbnwwfHx8fDE3NjQzOTUxOTN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1685381949388-bb0402fbe133?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fG5vdGlmaWNhdGlvbnxlbnwwfHx8fDE3NjQzOTUxOTN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Real-time with Mercure and FrankenPHP"><p><a href="https://frankenphp.dev/">FrankenPHP</a> is pretty badass, especially when used with <a href="https://laravel.com/docs/12.x/octane">Laravel Octane</a> to make your Laravel app really fly. But what about real-time feedback to the frontend?</p><p>As it happens, FrankenPHP ships with Mercure support out of the box. But configuration can be a little challenging. 
Having just fought with it for longer than I&apos;d like to admit, I thought I&apos;d share some of the useful tidbits I learned, in hopes of saving someone else the brain damage.</p><p>For my purposes, I&apos;m running the <a href="https://serversideup.net/open-source/docker-php/docs/image-variations/frankenphp">ServerSideUp FrankenPHP Docker</a> image; the first challenge was deciphering just how to get it set up at all, because the docs there only mention that it&apos;s available, not how to configure it.</p><p>At first I just tried setting environment variables in my docker config, which seemed to... sort of work, but I continually received 403 errors. Then I decided to try the <code>CADDY_SERVER_EXTRA_DIRECTIVES</code> env var, and define the mercure module config:</p><pre><code>CADDY_SERVER_EXTRA_DIRECTIVES: |
  mercure {
    publisher_jwt !mySuperSecretKey!
    anonymous
  }</code></pre><p>After that change, I was getting a 401 response... progress, but still no joy.</p><p>I&apos;ll be brief: the JWT secret needs to be surrounded by <code>!</code> if it&apos;s a hash (base64, for instance), but if it&apos;s <em>hex</em>, you need to <em>omit</em> the <code>!</code> or it&apos;ll misinterpret the key and your properly signed request will be treated as invalid.</p><p>So, the tl;dr, which doesn&apos;t seem to be documented very well anywhere:</p><pre><code>openssl rand -base64 48</code></pre><p>Produces something like: <br>7/SrS1rBcVyw2NK2sj2VD2guaK6PNlfnIDIHse6mBpWE7N2jTt1pXldMnKsmA</p><p>Surround it with <code>!</code></p><pre><code>openssl rand -hex 48</code></pre><p>Produces something like:<br>483b2395ceaee186816bfa6972d4a8a8d7025c838220c4fe2e3dfa43e907ee7</p><p>Do <em>NOT</em> surround it with <code>!</code></p><hr><p>The other issue I ran into was that it complained that it could not open the boltDB for writing; that was easily solved by explicitly defining the transport path to a writeable directory - in my case, one I mounted from my host so it persists if I remove the container, and more importantly so I could easily see that it created the file... &#x1F605;<br><br>My final env entry to get it working looks like this (not my actual keys):</p><pre><code>CADDY_SERVER_EXTRA_DIRECTIVES: |
  # Mercure Hub for real-time Server-Sent Events
  mercure {
    # Publisher JWT Secret
    publisher_jwt 123abc HS256

    # Subscriber JWT Secret
    subscriber_jwt 456xyz HS256

    # Allow anonymous listeners
    anonymous

    # Allow CORS from anywhere
    cors_origins *

    # Allow Publishing from anywhere (or maybe just your server)
    publish_origins *

    # Tell it where to write the bolt db, somewhere it CAN write
    transport bolt {
      path /opt/frankenphp/mercure.db
    }
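    # Any other hub directive can go here the same way; for example,
    # uncommenting the next line would expose the subscription web API
    # (see the Mercure directive docs for the full list):
    # subscriptions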
  }</code></pre><p>All the <a href="https://mercure.rocks/docs/hub/config#directives">available mercure directives</a> can be used here.</p><hr><p>So, why go to the trouble? After all, I could just use Reverb, right? Well, websockets are inherently bi-directional, and I have no need for that, whereas Mercure is really intended for sending to client-side subscribers and, for mobile users, is easier on the battery. It&apos;s a pretty cool protocol that I anticipate using more often moving forward. And, bonus - no need for another docker container or hoop jumping to run Reverb in my frankenphp container, which means I have a few more resources to spread around the other services, and one less container to orchestrate and manage.</p>]]></content:encoded></item><item><title><![CDATA[Making Pagefind Rerun Search on Browser Back Button]]></title><description><![CDATA[One thing that's bothered me about search on my JAMStack sites is the default behavior when moving through browser history.]]></description><link>https://willvincent.com/2025/05/22/making-pagefind-rerun-search-on-browser-back-button/</link><guid isPermaLink="false">682f9584f5bda1000178a57f</guid><category><![CDATA[Javascript]]></category><category><![CDATA[Programming]]></category><category><![CDATA[JAMStack]]></category><category><![CDATA[Miscellaneous]]></category><category><![CDATA[11ty]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Thu, 22 May 2025 21:35:39 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1586769852836-bc069f19e1b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fHNlYXJjaHxlbnwwfHx8fDE3NDc5NDczNTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1586769852836-bc069f19e1b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDN8fHNlYXJjaHxlbnwwfHx8fDE3NDc5NDczNTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Making Pagefind Rerun Search on Browser Back Button"><p>I&apos;ve been using <a href="https://pagefind.app">Pagefind</a> quite happily on my JAMStack sites for some time, but one thing that&apos;s always bothered me a little bit is the default behavior when moving through browser history.</p><p>By default, when you run a search and then click a result, all works as you&apos;d expect. But when you click <code>Back</code>, in my experience the last-entered search term remains populated in the search field, yet no search results are displayed - obviously because those get populated dynamically via JavaScript, and I assume some browser state is keeping the input field filled.</p><p>You <em>can</em> force it to run a search if there&apos;s a search string in the URL query parameters - I think a lot of people have discovered and implemented that - and it&apos;s great, especially if you&apos;re adding the schema JSON to make a search box available to Google (whether they ever choose to enable it for your site or not).</p><p>But wouldn&apos;t it be great if, instead of a populated search field with no results, or a potentially incorrect prior search rerunning when you navigate back, it always reran your last search phrase? Whether that came from clicking a link with the search term baked in, using the Google search box, or manually entering a search phrase.</p><p>It <em>would</em> be great... no, it <strong>IS</strong> great! Just a handful of JS makes it work... 
we simply need to add some listeners that update the URL every time the search term changes, and then, along with watching for a value populated to <code>q</code> in the URL, listen for <code>popstate</code> events to trigger the search on back-button navigation.</p><p>Here&apos;s a bit of code that makes it go:</p><!--kg-card-begin: markdown--><pre><code class="language-js">&lt;script&gt;
  // This assumes you have your baseUrl defined in some site data
  // and that your search page lives at /search/
  // and that your search phrase is defined with the &quot;q&quot; GET parameter
  
  const BASE_URL = &apos;{{ site.baseUrl }}/search/&apos;;

  const makeUrl = term =&gt;
    term ? `${BASE_URL}?q=${encodeURIComponent(term)}` : BASE_URL;

  const debounce = (fn, ms = 250) =&gt; {
    let t;
    return (...args) =&gt; {
      clearTimeout(t);
      t = setTimeout(() =&gt; fn(...args), ms);
    };
  };

  window.addEventListener(&apos;DOMContentLoaded&apos;, () =&gt; {
    const pagefind = new PagefindUI({
      autofocus: true,
      element: &apos;#search&apos;,
      showSubResults: true,
    });

    const params = new URLSearchParams(location.search);
    const initial = params.get(&apos;q&apos;) || &apos;&apos;;
    if (initial) pagefind.triggerSearch(initial);

    const input = document.querySelector(&apos;input.pagefind-ui__search-input&apos;);
    const clearButton = document.querySelector(&apos;button.pagefind-ui__search-clear&apos;)

    input.addEventListener(&apos;input&apos;, debounce(e =&gt; {
      const term = e.target.value.trim();
      history.replaceState({ q: term }, &apos;&apos;, makeUrl(term));
    }));

    clearButton.addEventListener(&apos;click&apos;, debounce(e =&gt; {
      history.replaceState({}, &apos;&apos;, makeUrl(null));
    }));

    window.addEventListener(&apos;popstate&apos;, e =&gt; {
      const term =
        (e.state &amp;&amp; e.state.q) ||
        new URLSearchParams(location.search).get(&apos;q&apos;) ||
        &apos;&apos;;

      input.value = term; 
      if (term) pagefind.triggerSearch(term);
      else clearButton.click();
    });
  });
&lt;/script&gt;
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[PHP Fat Arrow Function Quirks]]></title><description><![CDATA[What is the actual point of fat arrow functions in PHP?]]></description><link>https://willvincent.com/2025/04/05/php-fat-arrow-function-quirks/</link><guid isPermaLink="false">67f0913af5bda1000178a4e6</guid><category><![CDATA[Programming]]></category><category><![CDATA[PHP]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Sat, 05 Apr 2025 02:42:21 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1525011268546-bf3f9b007f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGFycm93fGVufDB8fHx8MTc0MzgyMDg4Mnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1525011268546-bf3f9b007f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGFycm93fGVufDB8fHx8MTc0MzgyMDg4Mnww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="PHP Fat Arrow Function Quirks"><p>I&apos;m beginning to wonder what the actual point of fat arrow functions in PHP really is. With the exception of multi-line logic, I figured they basically worked the same way they do in JavaScript, giving you full, direct access to the parent scope. But that is clearly <em>not</em> how they work!</p><p>In JavaScript, if you use the arrow function syntax you get direct access to the outer scope - SUPER handy! So in JS I might do something like this:</p><pre><code class="language-javascript">const foo = [1,2,3,4]
const bar = [&apos;one&apos;, &apos;two&apos;, &apos;three&apos;, &apos;four&apos;]

const baz = bar.flatMap((i) =&gt; [foo.shift(), i])

// baz === [1, &apos;one&apos;, 2, &apos;two&apos;, 3, &apos;three&apos;, 4, &apos;four&apos;]</code></pre><p>With the arrow function syntax I can access <code>foo</code> and it works as expected, giving me the first item from the array on each iteration.</p><p>In PHP, if I try the same:</p><pre><code class="language-php">$foo = [1,2,3,4];
$bar = [&apos;one&apos;, &apos;two&apos;, &apos;three&apos;, &apos;four&apos;];

// Using Laravel&apos;s collect for simplicity here:
$baz = collect($bar)
          -&gt;flatMap(fn ($i) =&gt; [array_shift($foo), $i])
          -&gt;toArray();

// $baz === [1, &apos;one&apos;, 1, &apos;two&apos;, 1, &apos;three&apos;, 1, &apos;four&apos;]</code></pre><figure class="kg-card kg-image-card"><img src="https://willvincent.com/content/images/2025/04/wtf-1.jpg" class="kg-image" alt="PHP Fat Arrow Function Quirks" loading="lazy" width="1010" height="626" srcset="https://willvincent.com/content/images/size/w600/2025/04/wtf-1.jpg 600w, https://willvincent.com/content/images/size/w1000/2025/04/wtf-1.jpg 1000w, https://willvincent.com/content/images/2025/04/wtf-1.jpg 1010w" sizes="(min-width: 720px) 720px"></figure><p>PHP gives me access to the outer scope with a fat arrow function, so I don&apos;t have to explicitly pass variables into my closure, which is nice, but.. they don&apos;t work the way I expect. I&apos;m <em>sure</em> there&apos;s a good PHP internals reason for this, and the best I can figure is that fat arrows are just syntactic sugar for a function that simply returns whatever is after the arrow, and makes <em>all the things</em> available as if you manually passed everything in a <code>use ()</code>, but it clearly does so by <em>COPY</em> not by <em>REFERENCE...</em><br><br>Because even if I use the more verbose closure syntax:</p><pre><code class="language-php">$foo = [1,2,3,4];
$bar = [&apos;one&apos;, &apos;two&apos;, &apos;three&apos;, &apos;four&apos;];

// Using Laravel&apos;s collect for simplicity here:
$baz = collect($bar)
          -&gt;flatMap(function ($i) use ($foo) {
              return [array_shift($foo), $i];
          })
          -&gt;toArray();

// $baz === [1, &apos;one&apos;, 1, &apos;two&apos;, 1, &apos;three&apos;, 1, &apos;four&apos;]</code></pre><p>I get the same thing. I have to specifically use the more verbose closure syntax <em>and</em> explicitly pass my <code>$foo</code> array into the closure by reference:</p><pre><code class="language-php">$foo = [1,2,3,4];
$bar = [&apos;one&apos;, &apos;two&apos;, &apos;three&apos;, &apos;four&apos;];

// Using Laravel&apos;s collect for simplicity here:
$baz = collect($bar)
          -&gt;flatMap(function ($i) use (&amp;$foo) {
              return [array_shift($foo), $i];
          })
          -&gt;toArray();

// $baz === [1, &apos;one&apos;, 2, &apos;two&apos;, 3, &apos;three&apos;, 4, &apos;four&apos;]</code></pre><p>This, again, kind of feels unnecessary, because <code>array_shift()</code> explicitly accepts the array you pass into it <em>by reference</em> ... so one, like myself, might logically conclude that it should behave the way the JS version does.</p><p>Best I can figure is that in a closure, whatever you pass in is captured <em>by COPY</em> once, at the moment the closure is created, unless you&apos;ve explicitly passed it in <em>by REFERENCE</em> &#x2013; and there&apos;s no way to <em>DO THAT</em> in a fat arrow function. &#x1F92C;<br><br>This leaves me wondering, truly, wtf is the actual point of fat arrow functions in PHP? They don&apos;t really seem to give that much value, other than slightly less verbosity - which, granted, is often a good thing.. but the lack of being able to do multiple lines, a la:</p><pre><code class="language-javascript">let foo = (bar) =&gt; {
  //do some stuff

  // do some other stuff

  return &apos;something&apos;
};</code></pre><p>...and apparently only giving restrictive access to the parent scope... what&apos;s the point? </p>]]></content:encoded></item><item><title><![CDATA[Improving Poor Array Validation Performance in Laravel]]></title><description><![CDATA[Since we don't live in an ideal world, we sometimes need to validate unreasonably large http requests.]]></description><link>https://willvincent.com/2025/03/10/improving-array-validation-performance/</link><guid isPermaLink="false">67cf2ac0f5bda1000178a336</guid><category><![CDATA[Programming]]></category><category><![CDATA[Laravel]]></category><category><![CDATA[PHP]]></category><category><![CDATA[Spatie]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Mon, 10 Mar 2025 19:25:19 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1615028381441-f22568dc45b3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIyfHxzbG93fGVufDB8fHx8MTc0MTYzNDYzOXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1615028381441-f22568dc45b3?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIyfHxzbG93fGVufDB8fHx8MTc0MTYzNDYzOXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Improving Poor Array Validation Performance in Laravel"><p>Let&apos;s start by addressing the elephant in the room &#x2013; ideally one should <strong><em>not</em></strong> be posting <em>thousands</em> of rows in a request. They should instead send many smaller batches, or upload a file somewhere to process in chunks with a queue.</p><p><em>But, let&apos;s be real</em> - we don&apos;t live or work in an ideal world, and sometimes you just need the unreasonably large http request of data to work, and be validated in a reasonable amount of time (what <em>that</em> amount is, is debatable).</p><p>I love Laravel, I have always loved Laravel. 
I&apos;m in the midst of migrating a large API from a JS backend to Laravel, changing as little as possible about the api in the process because the application that consumes it <em>cannot</em> be rebuilt at this time. Thus, queues and the like for processing the data posted to the endpoint in question are off the table.</p><p>Now, there <em>is</em> already validation happening client-side on this data, so the naive approach might be to say that&apos;s sufficient and just assume good data has been submitted, but it&apos;s never a good plan to trust user input, so that&apos;s off the table too &#x2013; and, I won&apos;t comment on whether or not that was how it used to be... &#x1F609;</p><p>What we&apos;re working with is a bit of general info for a top-level object, and then an array of <em>items</em> consisting mostly of a location name and address parts. These records will be attached as children to the top level object, for someone to work through and manually process later within the application.</p><p>For the most part, our validation rules are little more than <code>required</code> or <code>nullable</code> and <code>string</code>, though there are also a couple date fields to set a start &amp; end date, that have to adhere to rules around their expected format, ensuring the start comes before the end, and that both dates fit within an allowable window. But still nothing too crazy.<br><br>Here are the validation rules I started with in my form request &#x2013; formatted for clarity &amp; string notation instead of arrays for brevity:</p><!--kg-card-begin: markdown--><pre><code class="language-php">
return [
  &apos;name&apos;                   =&gt; &apos;required&apos;,
  &apos;organization_id&apos;        =&gt; &apos;required|exists:organizations,id&apos;,
  &apos;tags&apos;                   =&gt; &apos;array&apos;,
  &apos;tags.*.name&apos;            =&gt; &apos;required&apos;,
  &apos;items&apos;                  =&gt; &apos;required|array|min:1|max:5000&apos;,
  &apos;items.*.name&apos;           =&gt; &apos;required|string&apos;,
  &apos;items.*.notes&apos;          =&gt; &apos;nullable|string&apos;,
  &apos;items.*.start_date&apos;     =&gt; &apos;required|date:Y-m-d|before_or_equal:items.*.end_date|&apos;.
                              &apos;before_or_equal:&apos;.$soonest.&apos;|after_or_equal:&apos;.$oldest,
  &apos;items.*.end_date&apos;       =&gt; &apos;required|date:Y-m-d|after_or_equal:items.*.start_date|&apos;.
                              &apos;before_or_equal:&apos;.$soonest.&apos;|after_or_equal:&apos;.$oldest,
  &apos;items.*.street_address&apos; =&gt; &apos;required|string&apos;,
  &apos;items.*.city&apos;           =&gt; &apos;required|string&apos;,
  &apos;items.*.state&apos;          =&gt; &apos;required|string&apos;,
  &apos;items.*.zip&apos;            =&gt; &apos;required|string|min:5|max:10&apos;,
  &apos;items.*.tags&apos;           =&gt; &apos;array&apos;,
  &apos;items.*.tags.*&apos;         =&gt; &apos;string&apos;,
];
</code></pre>
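<p>For context, <code>$soonest</code> and <code>$oldest</code> here are just date-bound strings; they&apos;re built with the same <code>now()</code> helpers shown in the Data class later in this post. In plain PHP terms, roughly this (a sketch, not the exact code):</p><pre><code class="language-php">// Equivalent of Laravel's now()->format('Y-m-d') and
// now()->subYear()->startOfMonth()->format('Y-m-d')
$soonest = (new DateTimeImmutable('today'))->format('Y-m-d');
$oldest  = (new DateTimeImmutable('first day of this month'))
    ->modify('-1 year')
    ->format('Y-m-d');</code></pre>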
<!--kg-card-end: markdown--><p>Of course my first thought was that maybe the date logic was slowing things down, so I tried to remove it first.. but no joy - on small datasets it does alright, but once you start approaching the max allowed items of 5,000 it will consistently time out. If I manually override the max execution time</p><!--kg-card-begin: markdown--><pre><code class="language-php">// Uncap the execution time, or even just set a large duration
ini_set(&apos;max_execution_time&apos;, 0);
</code></pre>
<!--kg-card-end: markdown--><p>it will complete, but in over a minute on my new M4 Pro mac mini, several minutes on my M2 macbook air.. too slow in any case. Especially since, if we simply omit all of the per-row item validation <code>items.*.whatever</code> and simply ensure that items is required, and is an array of appropriate size, the full request lifecycle is only about 250ms from the time I submit a request with 5,000 items in the payload, until it creates all those records, and returns a successful response.</p><h3 id="yikes">Yikes!</h3><p>250ish milliseconds, vs &#x2013; <em>best case</em> &#x2013; 1 minute. </p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://willvincent.com/content/images/2025/03/pexels-mikhail-nilov-6963065.jpg" class="kg-image" alt="Improving Poor Array Validation Performance in Laravel" loading="lazy" width="1920" height="1358" srcset="https://willvincent.com/content/images/size/w600/2025/03/pexels-mikhail-nilov-6963065.jpg 600w, https://willvincent.com/content/images/size/w1000/2025/03/pexels-mikhail-nilov-6963065.jpg 1000w, https://willvincent.com/content/images/size/w1600/2025/03/pexels-mikhail-nilov-6963065.jpg 1600w, https://willvincent.com/content/images/2025/03/pexels-mikhail-nilov-6963065.jpg 1920w" sizes="(min-width: 1200px) 1200px"></figure><p>But, as we&apos;ve already covered, we obviously don&apos;t want to just <em>assume</em> the user input is valid and safe. Even though we&apos;re doing client side validation in the application that talks to this api. Eventually we may have outside consumption of the api, or someone could manually post to it &#x2013; assuming they have/can find proper authentication and whatnot. In any case.. 
we need to <em>solve</em> this.</p><p>There are <a href="https://github.com/laravel/framework/issues/49375">several</a> <a href="https://github.com/laravel/ideas/issues/2212">issues</a> on github and <a href="https://laracasts.com/discuss/channels/requests/request-rules-validation-is-very-slow">elsewhere</a> referencing this issue (that&apos;s been around for many years), wherein validation with the asterisk to apply rules to array items is <em>sloooow</em>... and while PRs are welcome, sadly the position of the core team thus far has simply been along the lines of &quot;well, you shouldn&apos;t be doing that sort of thing.&quot; Which, feels like the wrong response when we&apos;re talking about a framework that is all about developer experience, etc. But that&apos;s a potentially ranty tangent we need not go down.</p><p>The good news is - I&apos;ve found a solution that is gonna work for me!</p><p>Thanks to the good folks at <a href="http://spatie.be/">Spatie</a>, we have <a href="https://spatie.be/docs/laravel-data/v4/introduction">Laravel-Data</a> available, which can be used in place of Form Request Objects, and it supports validation of nested items. So naturally that would be one&apos;s first inclination (or maybe just mine) as the next thing to try.<br><br>So I replaced my FormRequest Object with a LaravelData object, added my first chunk of permissions, then created another data object for the items in my items array, and applied the relevant permissions to that.<br><br>Defining items as an array, and including detail in the PhpDoc block that it should be an array of my Item Data objects got validation working, and there was indeed some performance improvement, but not enough. It came down from about a minute to 35-45 seconds. </p><p>We&apos;ll call it maybe a 40% improvement. 
That&apos;s not bad, but I wondered whether I could do better because that was still pretty snail-like, and after all I was still looking at the unvalidated speed of 250ms and we&apos;re still way too far off of that mark to quit.</p><p>So, I reverted back to my prior Form Request object, which I already knew worked well if I <em>only</em> validated the top level stuff, plus the basic check that <em>items</em> was a required array of 1 - 5000 rows.</p><p>Leaving my validation as just that in the form request object, I added one single line to the start of the method these requests get routed to:</p><!--kg-card-begin: markdown--><pre><code class="language-php">$items = collect($request-&gt;validated(&apos;items&apos;))
  -&gt;map(
    fn($item) =&gt; ItemData::factory()
                   -&gt;alwaysValidate()
                   -&gt;from($item)
  );
</code></pre>
<!--kg-card-end: markdown--><p>So I&apos;m grabbing a collection from the <em>validated</em> items &#x2013; we know it&apos;s an array of appropriate size &#x2013; then <em>mapping</em> that into instances, ensuring the validation rules are applied, for each individual item. Instead of it applying array validation to the whole set.</p><p>The end result is that the same set of 5,000 records that previously took a minute or more, very probably simply timing out the request in most cases &#x2013; even if you actually have the ability to override max execution time &#x2013; now completes in about 2 seconds.</p><p>We&apos;re still not winning any major speed awards, but we do have validated data, and appropriate validation error message responses... did I forget to mention, if any of those objects fail to instantiate it&apos;ll throw a 422 response with validation errors? Anyway, we&apos;re validating the user input, and doing so in, what I&apos;ll consider a reasonable response time given that, as discussed at the beginning, this is definitely a <em>sub-optimal</em> way to ingest this data.</p><p>Also of note, in practice the users uploading this data are only sending at <em>most</em> a couple hundred records at a time, so for them this should be plenty fast. (100 completes in about 120ms)</p><p>So, here&apos;s where we ended up. Rules in the FormRequest object now look like this:</p><!--kg-card-begin: markdown--><pre><code class="language-php">
return [
  &apos;name&apos;            =&gt; &apos;required&apos;,
  &apos;organization_id&apos; =&gt; &apos;required|exists:organizations,id&apos;,
  &apos;tags&apos;            =&gt; &apos;array&apos;,
  &apos;tags.*.name&apos;     =&gt; &apos;required&apos;,
  &apos;items&apos;           =&gt; &apos;required|array|min:1|max:5000&apos;,
];
</code></pre>
<!--kg-card-end: markdown--><p>I have the collection map from above as the <em>first line</em> in my Invokable class that the endpoint is mapped to, and I have this Data object to handle validation of the individual items:</p><!--kg-card-begin: markdown--><pre><code class="language-php">class ValidOrderItemData extends Data
{
    public function __construct(
        public string $name,
        public ?string $notes,
        public string $start_date,
        public string $end_date,
        public string $street_address,
        public string $city,
        public string $state,
        public string $zip,
        public ?array $tags,
    ) {}

    public static function rules(): array
    {
        $soonest = now()-&gt;format(&apos;Y-m-d&apos;);
        $oldest = now()-&gt;subYear()-&gt;startOfMonth()-&gt;format(&apos;Y-m-d&apos;);

        return [
            &apos;name&apos; =&gt; [&apos;required&apos;, &apos;string&apos;],
            &apos;notes&apos; =&gt; [&apos;nullable&apos;, &apos;string&apos;],
            &apos;start_date&apos; =&gt; [&apos;required&apos;, &apos;date:Y-m-d&apos;, &apos;before_or_equal:end_date&apos;, &quot;before_or_equal:$soonest&quot;, &quot;after_or_equal:$oldest&quot;],
            &apos;end_date&apos; =&gt; [&apos;required&apos;, &apos;date:Y-m-d&apos;, &apos;after_or_equal:start_date&apos;, &quot;before_or_equal:$soonest&quot;, &quot;after_or_equal:$oldest&quot;],
            &apos;street_address&apos; =&gt; [&apos;required&apos;, &apos;string&apos;],
            &apos;city&apos; =&gt; [&apos;required&apos;, &apos;string&apos;],
            &apos;state&apos; =&gt; [&apos;required&apos;, &apos;string&apos;],
            &apos;zip&apos; =&gt; [&apos;required&apos;, &apos;string&apos;, &apos;min:5&apos;, &apos;max:10&apos;],
            &apos;tags&apos; =&gt; [&apos;array&apos;],
            &apos;tags.*&apos; =&gt; [&apos;string&apos;],
        ];
    }
}
</code></pre>
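<p>As an aside, nothing about this trick is Spatie-specific. The core idea is simply to validate each row against <em>flat</em> rules instead of one giant wildcard rule set. Stripped of the framework entirely, the shape of it looks something like this (field names and rules here are illustrative only, not the real implementation):</p><pre><code class="language-php">// Illustrative sketch: per-row checks against flat rules, collecting
// errors keyed the way Laravel keys them ("items.{index}.{field}").
function validateRow(array $row): array
{
    $errors = [];
    foreach (['name', 'street_address', 'city', 'state', 'zip'] as $field) {
        if (!isset($row[$field]) || !is_string($row[$field]) || $row[$field] === '') {
            $errors[$field] = "The $field field is required.";
        }
    }
    return $errors;
}

$items = [
    ['name' => 'Site A', 'street_address' => '1 Main St', 'city' => 'Boston', 'state' => 'MA', 'zip' => '02101'],
    ['name' => '', 'city' => 'Salem'],
];

$allErrors = [];
foreach ($items as $i => $row) {
    foreach (validateRow($row) as $field => $message) {
        $allErrors["items.$i.$field"] = $message;
    }
}
// Row 0 passes; row 1 yields errors for name, street_address, state, and zip.</code></pre><p>If <code>$allErrors</code> ends up non-empty you&apos;d turn it into a 422 response &#x2013; which is what the Data object above does for us automatically.</p>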
<!--kg-card-end: markdown--><p>...and just like that, I&apos;m validating a large array of data about 30x faster than the wildcard property name validation rules manage out of the box, and at the end of the day it&apos;s not really all that much more effort.</p>]]></content:encoded></item><item><title><![CDATA[Moving from Netlify to Cloudflare Pages]]></title><description><![CDATA[<p>I&apos;ve happily been using Netlify for several years, and was surprised to learn randomly, around 2020/2021 or so, that cloudflare quietly put out a competing product. My initial tests of cloudflare pages back when it was new were disappointing with excruciatingly slow build times, but boy has</p>]]></description><link>https://willvincent.com/2024/02/18/moving-from-netlify-to-cloudflare-pages/</link><guid isPermaLink="false">65d28f3ace3fd60001658b09</guid><category><![CDATA[Hosting]]></category><category><![CDATA[11ty]]></category><category><![CDATA[cloudflare]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Sun, 18 Feb 2024 23:49:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1614359835514-92f8ba196357?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fG1vdmluZyUyMHZhbnxlbnwwfHx8fDE3MDgyOTgxNjh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1614359835514-92f8ba196357?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fG1vdmluZyUyMHZhbnxlbnwwfHx8fDE3MDgyOTgxNjh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Moving from Netlify to Cloudflare Pages"><p>I&apos;ve happily been using Netlify for several years, and was surprised to learn randomly, around 2020/2021 or so, that cloudflare quietly put out a competing product. 
My initial tests of cloudflare pages back when it was new were disappointing with excruciatingly slow build times, but boy has that ever changed!</p><p>Cloudflare has definitely made big strides in boosting performance of builds to their pages deployments, and it&apos;s now just as efficient and quick as (maybe even quicker than) Netlify. So the time to jump was nigh.</p><p>But, if I was happy with Netlify, why change? &#xA0;Well, the main reason was that cloudflare pages are served up with http/3 rather than just http/2. While that isn&apos;t a huge deal at the moment, I didn&apos;t feel like being stuck in the (recent) past as Netlify has shown no interest in upgrading.</p><p>Migration, ultimately, was very straightforward. I created a new pages project over at cloudflare, pointed at my github repo</p><figure class="kg-card kg-image-card"><img src="https://willvincent.com/content/images/2024/02/Screenshot-2024-02-18-at-5.23.53-PM.jpg" class="kg-image" alt="Moving from Netlify to Cloudflare Pages" loading="lazy" width="2000" height="1232" srcset="https://willvincent.com/content/images/size/w600/2024/02/Screenshot-2024-02-18-at-5.23.53-PM.jpg 600w, https://willvincent.com/content/images/size/w1000/2024/02/Screenshot-2024-02-18-at-5.23.53-PM.jpg 1000w, https://willvincent.com/content/images/size/w1600/2024/02/Screenshot-2024-02-18-at-5.23.53-PM.jpg 1600w, https://willvincent.com/content/images/size/w2400/2024/02/Screenshot-2024-02-18-at-5.23.53-PM.jpg 2400w" sizes="(min-width: 720px) 720px"></figure><p>I then added a custom domain to this new pages setup:</p><figure class="kg-card kg-image-card"><img src="https://willvincent.com/content/images/2024/02/Screenshot-2024-02-18-at-5.24.40-PM.jpg" class="kg-image" alt="Moving from Netlify to Cloudflare Pages" loading="lazy" width="1914" height="966" srcset="https://willvincent.com/content/images/size/w600/2024/02/Screenshot-2024-02-18-at-5.24.40-PM.jpg 600w, 
https://willvincent.com/content/images/size/w1000/2024/02/Screenshot-2024-02-18-at-5.24.40-PM.jpg 1000w, https://willvincent.com/content/images/size/w1600/2024/02/Screenshot-2024-02-18-at-5.24.40-PM.jpg 1600w, https://willvincent.com/content/images/2024/02/Screenshot-2024-02-18-at-5.24.40-PM.jpg 1914w" sizes="(min-width: 720px) 720px"></figure><p>This required me to update my DNS to change the existing CNAME record from Netlify to point to cloudflare.</p><p>I already had redirects set up on my Netlify deployment, by publishing them to a <code>_redirects</code> file, in the format:</p><pre><code class="language-text">/old/path1		/new/redirected/path1		308
/old/path2		/new/redirected/path2		302
/old/path3		/temp/redirection			301</code></pre><p>Previous location, new location, redirect code. Pretty simple... and cloudflare pages support the <em>exact same format</em> in a file with the same name. Doesn&apos;t get any easier than that!</p><p>The only notable change I had to make was to add a new <code>_headers</code> file, because that previously was defined in a toml file for the netlify deployment. </p><p>My old netlify headers were defined thusly:</p><pre><code class="language-toml">[[headers]]
  for = &quot;/*&quot;
  [headers.values]
    Strict-Transport-Security = &quot;max-age=63072000; includeSubDomains; preload&quot;
</code></pre><p>The new header is defined a little more succinctly in a <code>_headers</code> file as noted...</p><pre><code class="language-text">/*
  Strict-Transport-Security: max-age=63072000; includeSubDomains; preload</code></pre><p>Basically just the applicable path to apply the header(s) to, and then the headers on indented lines following that. Pretty simple stuff.</p><p>I did have a function in place on netlify to push sitemap updates to google on deploy, but google has deprecated that endpoint so I didn&apos;t migrate the post-deploy function to cloudflare, but yep - it totally supports those too.</p><p>One of the other niceties of cloudflare pages is that it has built in analytics, which if all you care about is seeing core web vitals and basic visits, it could be a viable, lighter weight, replacement for google analytics:</p><figure class="kg-card kg-image-card"><img src="https://willvincent.com/content/images/2024/02/dash.cloudflare.com_addf8e87edde000d14e538e1edf88f0a_web-analytics_overview_siteTag-in-f41cbe23262443d1a601692324c33293-excludeBots-Yes-time-window-43200.png" class="kg-image" alt="Moving from Netlify to Cloudflare Pages" loading="lazy" width="2000" height="2427" srcset="https://willvincent.com/content/images/size/w600/2024/02/dash.cloudflare.com_addf8e87edde000d14e538e1edf88f0a_web-analytics_overview_siteTag-in-f41cbe23262443d1a601692324c33293-excludeBots-Yes-time-window-43200.png 600w, https://willvincent.com/content/images/size/w1000/2024/02/dash.cloudflare.com_addf8e87edde000d14e538e1edf88f0a_web-analytics_overview_siteTag-in-f41cbe23262443d1a601692324c33293-excludeBots-Yes-time-window-43200.png 1000w, https://willvincent.com/content/images/size/w1600/2024/02/dash.cloudflare.com_addf8e87edde000d14e538e1edf88f0a_web-analytics_overview_siteTag-in-f41cbe23262443d1a601692324c33293-excludeBots-Yes-time-window-43200.png 1600w, https://willvincent.com/content/images/size/w2400/2024/02/dash.cloudflare.com_addf8e87edde000d14e538e1edf88f0a_web-analytics_overview_siteTag-in-f41cbe23262443d1a601692324c33293-excludeBots-Yes-time-window-43200.png 2400w" sizes="(min-width: 720px) 
720px"></figure><p>For just basic visits, by page &amp; country/etc, this is totally serviceable data and you may not need anything beyond this. It&apos;s really incredible that this is all <em>free</em>!</p><p>The last bit of config I did was to ensure that my site is only accessible at the custom domain, not the <code>.pages.dev</code> domain &#x2013; other than preview deploys, which, oh yes are also included, and better can be locked down to only be accessible to people you specify should be allowed to access them!</p><p>I left the netlify deployment in place to ensure that I wouldn&apos;t break things for anyone in case the DNS hadn&apos;t propagated, but now that it&apos;s been humming along nicely on cloudflare for over a week, I should probably get around to removing that. </p><p>In all, this was easily the <strong><em>most painless</em></strong> migration I&apos;ve ever done.</p><p>In case you&apos;re curious - the site in question is the site for my voiceover business, which is built with the static sitebuilder, Eleventy, Tailwind CSS and a sprinkle of AlpineJS. I&apos;m very pleased with how easy it is to maintain and extend, and especially with how quickly it <em>builds</em> and even more so how fast it&apos;s served.</p><p>If you&apos;re in the market for any voice work, be it for a video game, phone messaging, training materials for an app, or anything else &#x2013; I&apos;d love to hear from you! Check out the site over here at <a href="https://www.willvincentvoice.com/">willvincentvoice.com</a>.</p><h2 id="update">Update: </h2><p>So there was one small issue I had to sort out when I shut off Netlify. 
Because cloudflare pages don&apos;t allow you to set up an A record, unless you let cloudflare manage the domain anyway, and Netlify <em>did,</em> I needed to set up a redirect from the non-www version of my domain to the www version that is set up as the custom domain for the cloudflare pages hosted site.</p><p>Conveniently, I&apos;m already running a bunch of miscellaneous services with docker on my server, and I use traefik to shuttle traffic around between those various docker instances. So setting up a redirect was basically just a matter of configuring a simple container. While in theory I should be able to do it with <em>just</em> traefik, I couldn&apos;t get it to work and ended up using the <code>morbz/docker-web-redirect</code> docker container setting the env var <code>VIRTUAL_HOST</code> to my non-www domain, and the <code>REDIRECT_TARGET</code> to the www version. I also included relevant traefik config to rewrite http to https, etc.</p><p>So, one small wrinkle if you&apos;re not letting cloudflare manage your domain I guess.</p>]]></content:encoded></item><item><title><![CDATA[Querying &amp; Paginating S3 Data]]></title><description><![CDATA[<p>There&apos;s no denying that Amazon AWS&apos;s S3 product is pretty fantastic and very flexible. 
It&apos;s especially interesting when you leverage S3 Select to query your CSV, JSON, or Parquet files.</p><p>If you&apos;re not familiar, Amazon S3 Select allows you to query your</p>]]></description><link>https://willvincent.com/2023/07/11/querying-and-paginating-s3-data/</link><guid isPermaLink="false">64ad8258a49fbf0001ea410c</guid><category><![CDATA[Programming]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Tue, 11 Jul 2023 16:52:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1569396116180-210c182bedb8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE3fHxiaWclMjBkYXRhfGVufDB8fHx8MTY4OTA5MDAzOXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1569396116180-210c182bedb8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE3fHxiaWclMjBkYXRhfGVufDB8fHx8MTY4OTA5MDAzOXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Querying &amp; Paginating S3 Data"><p>There&apos;s no denying that Amazon AWS&apos;s S3 product is pretty fantastic and very flexible. It&apos;s especially interesting when you leverage S3 Select to query your CSV, JSON, or Parquet files.</p><p>If you&apos;re not familiar, Amazon S3 Select allows you to query your files for data using a simplified SQL syntax. It <em>is</em> SQL syntax but simplified because it only supports a fairly small subset of the SQL language. But that&apos;s ok, there&apos;s still a LOT that can be done:<br></p><ul><li>Count queries, to find the number of matching results or number of lines in a file.</li><li>Simple WHERE clauses, to grab a subset of data, to populate a chart on a website for instance.</li><li>Pagination... sort of. 
This is what this article is about.</li></ul><p>Suppose you have a costly query that takes your primary data source minutes or more to run; you might want to cache those results so that it&apos;s less painful to use elsewhere. This might be a good case for something like Mongo, DynamoDB, etc. but S3 is also a perfectly viable option.</p><p>Say you output a report as a CSV, and store it in S3 to later be displayed as tabular data on a website. That&apos;s pretty straightforward. But suppose the file contains hundreds, thousands, or millions of rows. You surely wouldn&apos;t want to try to display a file that large in the browser - any sane person would want to paginate that data.</p><p>S3 Select provides us with the LIMIT clause to specify how many rows we want to select, which gets us halfway there. Anyone who&apos;s manually implemented pagination queries knows that it&apos;s a function of LIMIT and OFFSET clauses to grab a window of desired size from somewhere within the overall result set. </p><p>Unfortunately, S3 Select does <em>not</em> offer the OFFSET clause. &#x2639;&#xFE0F;</p><p>As it turns out though, the solution isn&apos;t too terrible. As long as you&apos;re able to inject a row counter or serialized id into your report data when you store it in S3 for later consumption, you can easily replace the OFFSET clause with a WHERE clause:</p><!--kg-card-begin: markdown--><pre><code>SELECT * FROM s3Object LIMIT 5 OFFSET 10
</code></pre>
<p>(which does not work)</p>
<p>is functionally the same as</p>
<pre><code>SELECT * FROM s3Object WHERE row &gt; 10 LIMIT 5
</code></pre>
<p>(which does work)</p>
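<p>To make the page-flipping math concrete, here&apos;s a hedged sketch of a tiny helper (the function name is my own, and it assumes you baked a numeric <code>row</code> column into the file when generating it) that turns a page number into an S3 Select expression. Note the CAST &#x2013; with CSV input the column values come back as strings:</p>
<pre><code class="language-php">// Hypothetical helper: emulate OFFSET via a WHERE clause on the row counter.
function s3SelectPage(int $page, int $perPage, string $rowColumn = 'row'): string
{
    $offset = ($page - 1) * $perPage;

    return sprintf(
        'SELECT * FROM s3Object s WHERE CAST(s.%s AS INT) > %d LIMIT %d',
        $rowColumn,
        $offset,
        $perPage
    );
}

// Page 3 at 5 rows per page skips rows 1-10 and returns rows 11-15.</code></pre>
<p>The resulting string is what you&apos;d hand to the <code>SelectObjectContent</code> call as its <code>Expression</code> parameter.</p>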
<!--kg-card-end: markdown--><p>I&apos;ve recently implemented this solution for reporting activities in one of the apps I&apos;m responsible for at my day job because certain reporting was taking so long to run that things were timing out for folks, so we&apos;ve instead moved to generating those asynchronously and firing a callback when the report completes.</p><p>This allows reports that previously wouldn&apos;t finish to complete, and lets reports that finished but were slow page through far more efficiently, since previously the on-demand requests were re-run for every pagination page-turn. So if page 1 took, say, 30 seconds to respond, so did page 2, etc. However, with the asynchronous generation the whole report might take a minute or two to run, but then pagination is speedy as it&apos;s just fetching a subset of already prepared rows from a file on S3.<br><br>It probably would still be even more performant to populate the report data into a database, but since people download the CSV file, we&apos;d end up generating it or streaming the data in that format at some point <em>anyway</em> so the little bit of performance we give up is an acceptable trade-off.<br><br>You could also <em>probably</em> accomplish this with the <code>ScanRange</code> parameter, providing start and end bytes to define a range of the file to scan. But given that it&apos;s based on byte sizes rather than row counts, I&apos;m not sure how you could reliably paginate equal-sized pages of data this way. That would probably be better for processing chunks of data, rather than for display.</p><p>Anyway, now you know how to work around the lack of the OFFSET clause with S3 Select. 
Happy coding!</p>]]></content:encoded></item><item><title><![CDATA[Implementing Dynamic Multiselect with Laravel Livewire and Alpine using ChoicesJS]]></title><description><![CDATA[How to fetch dynamic select options from the server or an API for ChoicesJS with a Laravel Livewire / AlpineJS Component (the TALL stack).]]></description><link>https://willvincent.com/2022/08/03/implementing-dynamic-multiselect-with-laravel-livewire-and-alpine-using-choicesjs/</link><guid isPermaLink="false">62eae8c0340f8a000135bd31</guid><category><![CDATA[Programming]]></category><category><![CDATA[Laravel]]></category><category><![CDATA[AlpineJS]]></category><category><![CDATA[Livewire]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Wed, 03 Aug 2022 22:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1525547719571-a2d4ac8945e2?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI2fHxzZWFyY2h8ZW58MHx8fHwxNjU5NTYzNTE1&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1525547719571-a2d4ac8945e2?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI2fHxzZWFyY2h8ZW58MHx8fHwxNjU5NTYzNTE1&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Implementing Dynamic Multiselect with Laravel Livewire and Alpine using ChoicesJS"><p>I recently had a need to populate the available options for a multiselect component with values from a remote API. ChoicesJS supports this, though it&apos;s documentation leaves much to be desired. As such, a new post detailing how I made my dynamic multiselect work seemed important.</p><p>What exactly do I mean by &quot;dynamic multiselect&quot; you ask? 
Simply that it&apos;s a select component allowing multiple choices, whose options are fetched - dynamically - from somewhere else, not preloaded up front.</p><p>The scenario here is that my users need to be able to select one or more resources, but they may have hundreds or thousands of available resources, so loading all the options up front to populate the list of selectable options was out of the question. That would have greatly simplified things, of course.</p><p>So, what is the desired functionality, and how do you build it?</p><p>I&apos;m glad you asked. The desired behavior is that when a user enters their search term into the input field ChoicesJS presents us, we want to trigger an API call that will fetch the matched resources and populate our list of available options, allowing them to make one or more selections. Then, they can optionally search again, selecting more options, and so forth.</p><p>Since we&apos;re using Livewire, all of the API interaction is going to happen server side; the beauty of this is that no credentials need to be exposed to the client side, and any heavy data manipulation will also occur server side.</p><h3 id="ok-first-things-first-we-need-a-livewire-component">Ok, first things first, we need a Livewire component</h3><!--kg-card-begin: markdown--><pre><code class="language-php">&lt;?php

namespace App\Http\Livewire;

use Livewire\Component;
use MyApi\Client;

class Select extends Component
{
  public $options = [];
  public $selections = [];

  public function render()
  {
    return view(&apos;livewire.select&apos;);
  }
}
</code></pre>
<p>That&apos;s the basics done, but we also need to write a method to call when the user executes a search:</p>
<pre><code class="language-php">public function search($term)
{
  $results = Client::search($term);

  $preserve = collect($this-&gt;options)
                -&gt;filter(fn ($option) =&gt;
                    in_array(
                      $option[&apos;value&apos;],
                      $this-&gt;selections
                    )
                  )
                -&gt;unique();

  $this-&gt;options = collect($results)
                     -&gt;map(fn ($item) =&gt; 
                             [
                               &apos;label&apos; =&gt; $item-&gt;name,
                               &apos;value&apos; =&gt; $item-&gt;id
                             ])
                     -&gt;merge($preserve)
                     -&gt;unique();

  $this-&gt;emit(&apos;select-options-updated&apos;, $this-&gt;options);
}
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>What&apos;s happening here?</p>
<ol>
<li>We pass the entered search phrase to the search functionality of our API client. This could also be something as basic as a <code>whereLike</code> query against a Laravel model.</li>
<li>We look at the existing options, and pluck out any that are currently selected.</li>
<li>We map the relevant data from the search results into the format expected by our ChoicesJS widget, and then merge in the options that we grabbed in the previous step.</li>
<li>Emit an event so that the JS widget can update its options list.</li>
</ol>
<p>Pretty straightforward... but why step 2, and why merge its results into the search query results?</p>
<p>Well, because the way we have choices wired up, it isn&apos;t aware of choices that aren&apos;t presently in the available options list. You could probably store the entire object in there if you really wanted to, but then you&apos;d be hauling around a lot more data on subsequent request/response calls... and this is pretty easy - and neat.</p>
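<p>The preserve-and-merge idea from steps 2 and 3 is easy to see outside of PHP. Here's the same logic as a standalone JavaScript sketch (the function name and data shapes are mine, for illustration only):</p>

```javascript
// Keep currently-selected options visible after a new search:
// hold on to any old option whose value is selected, map the fresh
// search results into {label, value} pairs, then merge and de-duplicate.
function mergeOptions(oldOptions, selections, results) {
  const preserved = oldOptions.filter((o) => selections.includes(o.value));
  const mapped = results.map((r) => ({ label: r.name, value: r.id }));
  const seen = new Set();
  return [...mapped, ...preserved].filter((o) => {
    if (seen.has(o.value)) return false;
    seen.add(o.value);
    return true;
  });
}
```

<p>Given the previous option list, the selected values, and a new result set, it returns the fresh options with any still-selected old options appended, each value appearing only once.</p>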
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="thatsthephpsidesortednowontotheview">That&apos;s the PHP side sorted, now on to the view</h2>
<pre><code class="language-php">@props([
  &apos;options&apos; =&gt; [],
])

@once
  @push(&apos;css&apos;)
    &lt;link rel=&quot;stylesheet&quot; href=&quot;https://cdn.jsdelivr.net/npm/choices.js/public/assets/styles/choices.min.css&quot; /&gt;
  @endpush
  @push(&apos;js&apos;)
    &lt;script src=&quot;https://cdn.jsdelivr.net/npm/choices.js/public/assets/scripts/choices.min.js&quot;&gt;&lt;/script&gt;
  @endpush
@endonce

&lt;div x-data=&quot;{}&quot; x-init=&quot;&quot;&gt;
  &lt;select x-ref=&quot;select&quot;&gt;&lt;/select&gt;
&lt;/div&gt;
</code></pre>
<p>There&apos;s our basic view framework stubbed out. Notice the use of @once, and @push to push the requisite CSS &amp; JS into the relevant stacks and ensure they&apos;re only added one time, so if you had multiple instances of this component on your page you wouldn&apos;t be polluting the DOM with several copies of the same script. Glorious. It&apos;s all the little niceties like this that make me love Laravel...</p>
<p>Ok, so we&apos;re loading the choices script and styles; now we need to point them at our lonely little select element and make it go. One piece at a time here...</p>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="letsfleshoutthexdata">Let&apos;s flesh out the x-data</h3>
<pre><code class="language-php">x-data=&quot;{
  value: @entangle(&apos;selections&apos;),
  options: {{ json_encode($options) }},
  debounce: null,
}&quot;
</code></pre>
<ul>
<li>The <strong>value</strong> gets entangled with - linked to - the Livewire state.</li>
<li><strong>options</strong> gets populated with any predefined values we may have set by default in our PHP component.</li>
<li><strong>debounce</strong> will be used as a target to track a setTimeout() to debounce user input; you don&apos;t <em>really</em> need to define it, but I&apos;ve included it for clarity.</li>
</ul>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="nowthemeatoftheimplementationthexinit">Now the meat of the implementation: the x-init</h3>
<pre><code class="language-php">x-init=&quot;this.$nextTick(() =&gt; {
  const choices = new Choices(this.$refs.select, {
    removeItems: true,
    removeItemButton: true,
    duplicateItemsAllowed: false,
  })

  const refreshChoices = () =&gt; {
    const selection = this.value

    choices.clearStore()

    choices.setChoices(this.options.map(({ value, label }) =&gt; ({
      value,
      label,
      selected: selection.includes(value),
    })))
  }


  this.$refs.select.addEventListener(&apos;change&apos;, () =&gt; {
    this.value = choices.getValue(true)
  })

  this.$refs.select.addEventListener(&apos;search&apos;, async (e) =&gt; {
    if (e.detail.value) {
      clearTimeout(this.debounce)
      this.debounce = setTimeout(() =&gt; {
        $wire.call(&apos;search&apos;, e.detail.value)
      }, 300)
    }
  })

  $wire.on(&apos;select-options-updated&apos;, (options) =&gt; {
    this.options = options
  })

  this.$watch(&apos;value&apos;, () =&gt; refreshChoices())
  this.$watch(&apos;options&apos;, () =&gt; refreshChoices())

  refreshChoices()
})&quot;
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><p>Alright, there&apos;s a bunch going on there, but nothing too terribly complicated.</p>
<ol>
<li>We instantiate ChoicesJS, pointing it at the select element we previously created via its x-ref value &apos;select&apos;, with some basic config.</li>
<li>Create a function that will refresh the available options whenever we call it, iterating through options and setting selected to true for any items whose value is in our selections wire model bucket.</li>
<li>Set up an event listener to sync the selections whenever a change event fires.</li>
<li>Set up an event listener to call our PHP search method, debounced with a 300ms timeout.</li>
<li>Set up a listener for the livewire event that we emit when we&apos;ve updated the options on the server side, and accordingly update them on the client side.</li>
<li>Set up watchers that call our refresh function when the selections or options have changed.</li>
<li>Finally, fire the refresh function to set up the initial state of the ChoicesJS widget.</li>
</ol>
<p>This is all wrapped in a <code>$nextTick</code> because the markup and JS libraries/etc need to be on the page before it will work.</p>
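<p>The heart of <code>refreshChoices</code> is just a projection of the options array. As a pure-function sketch (the name <code>toChoices</code> is mine):</p>

```javascript
// Map {value, label} options into the shape ChoicesJS expects,
// marking as selected any option whose value is currently chosen.
function toChoices(options, selection) {
  return options.map(({ value, label }) => ({
    value,
    label,
    selected: selection.includes(value),
  }));
}
```

<p>Re-running this projection on every change is what keeps the widget's selected state in sync with the entangled Livewire state.</p>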
<!--kg-card-end: markdown--><p>I can&apos;t take credit for all of this; Caleb nicely outlined how to work with the ChoicesJS library along with Alpine in an integrations write-up on the AlpineJS site. But the search and $wire.on bits are all me.</p><p>So that&apos;s it. When you type into the ChoicesJS widget, it&apos;ll hit the API client and fetch results, and those results will populate the choices list. When you make a selection it&apos;ll work the same as if it&apos;d been part of a pre-populated list all along. When you run a new search, previously selected elements will be merged with the new search results so that the prior selections stay visible in the ChoicesJS widget.</p><p>You probably noticed there&apos;s no <em>wire:model </em>defined on the select element... well, that&apos;s because we&apos;re managing the state with Alpine, so it&apos;s not necessary. ChoicesJS is going to override that select element anyway.</p><h3 id="our-final-livewire-component-view">Our final Livewire component &amp; view</h3><!--kg-card-begin: markdown--><h3 id="livewirephpcomponent">Livewire PHP Component</h3>
<pre><code class="language-php">&lt;?php

namespace App\Http\Livewire;

use Livewire\Component;
use MyApi\Client;

class Select extends Component
{
  public $options = [];
  public $selections = [];

  public function render()
  {
    return view(&apos;livewire.select&apos;);
  }

  public function search($term)
  {
    $results = Client::search($term);

    $preserve = collect($this-&gt;options)
                  -&gt;filter(fn ($option) =&gt; 
                    in_array(
                      $option[&apos;value&apos;],
                      $this-&gt;selections
                    )
                  )
                  -&gt;unique();

    $this-&gt;options = collect($results)
                       -&gt;map(fn ($item) =&gt;
                         [
                           &apos;label&apos; =&gt; $item-&gt;name,
                           &apos;value&apos; =&gt; $item-&gt;id
                         ])
                       -&gt;merge($preserve)
                       -&gt;unique();

    $this-&gt;emit(&apos;select-options-updated&apos;, $this-&gt;options);
  }
}

</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h3 id="bladeview">Blade View</h3>
<pre><code class="language-php">@props([
 &apos;options&apos; =&gt; [],
])

@once
  @push(&apos;css&apos;)
    &lt;link rel=&quot;stylesheet&quot; href=&quot;https://cdn.jsdelivr.net/npm/choices.js/public/assets/styles/choices.min.css&quot; /&gt;
  @endpush
  @push(&apos;js&apos;)
    &lt;script src=&quot;https://cdn.jsdelivr.net/npm/choices.js/public/assets/scripts/choices.min.js&quot;&gt;&lt;/script&gt;
  @endpush
@endonce


&lt;div
  wire:ignore

  x-data=&quot;{
    value: @entangle(&apos;selections&apos;),
    options: {{ json_encode($options) }},
    debounce: null,
  }&quot;

  x-init=&quot;
    this.$nextTick(() =&gt; {
      const choices = new Choices(this.$refs.select, {
        removeItems: true,
        removeItemButton: true,
        duplicateItemsAllowed: false,
     })

     const refreshChoices = () =&gt; {
       const selection = this.value
  
       choices.clearStore()

       choices.setChoices(this.options.map(({ value, label }) =&gt; ({
         value,
         label,
         selected: selection.includes(value),
       })))
     }

     this.$refs.select.addEventListener(&apos;change&apos;, () =&gt; {
       this.value = choices.getValue(true)
     })

     this.$refs.select.addEventListener(&apos;search&apos;, async (e) =&gt; {
       if (e.detail.value) {
         clearTimeout(this.debounce)
         this.debounce = setTimeout(() =&gt; {
           $wire.call(&apos;search&apos;, e.detail.value)
         }, 300)
       }
     })

     $wire.on(&apos;select-options-updated&apos;, (options) =&gt; {
       this.options = options
     })

     this.$watch(&apos;value&apos;, () =&gt; refreshChoices())
     this.$watch(&apos;options&apos;, () =&gt; refreshChoices())

     refreshChoices()
   })&quot;&gt;

  &lt;select x-ref=&quot;select&quot;&gt;&lt;/select&gt;

&lt;/div&gt;

</code></pre>
<!--kg-card-end: markdown--><h3 id="final-thoughts">Final Thoughts</h3><p>This component could, and probably should, be cleaned up to be more reusable. Making the event listened to definable as a prop passed into the blade component and so forth, but this should give you enough of an overview to get something like it working on your project.</p><p>As always, huge thanks to the community that makes any of this possible. Laravel, Livewire, AlpineJS and Tailwind truly make development a dream - I am a huge huge fan of the TALL stack!</p>]]></content:encoded></item><item><title><![CDATA[Redirects with 11ty and Netlify]]></title><description><![CDATA[Static site builders are pretty great, but how do you go about setting up redirects when you inevitably move or replace a page? Netlify & 11ty make it easy.]]></description><link>https://willvincent.com/2022/07/27/redirects-with-11ty-and-netlify/</link><guid isPermaLink="false">62e0ab33340f8a000135bc79</guid><category><![CDATA[Javascript]]></category><category><![CDATA[Miscellaneous]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Wed, 27 Jul 2022 03:37:54 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1613939437942-12c2ac447fe1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGRldG91cnxlbnwwfHx8fDE2NTg4OTI4NTU&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1613939437942-12c2ac447fe1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGRldG91cnxlbnwwfHx8fDE2NTg4OTI4NTU&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Redirects with 11ty and Netlify"><p>Static site builders are pretty great, but how do you go about setting up redirects when you inevitably move or replace a page?</p><p>Thankfully, <a href="https://netlify.com">Netlify</a> expects that you will eventually need to setup 301 
and 302 redirects, and even allows you to redirect with a 404 or 200 response code too. Neat! </p><p>Their <a href="https://docs.netlify.com/routing/redirects/redirect-options/">docs</a> cover the format of the file pretty clearly, so I won&apos;t spend too much time rehashing that here, but suffice it to say, you need to have a file named <code>_redirects</code> in the docroot, with one redirect per line. Obviously you could manually create that file and have it get passthrough copied during the build of your static site, but who wants to manually maintain such a file when we can let the sitebuilder handle it <em>for</em> us?</p><p>I use <a href="https://www.11ty.dev">11ty</a>, not on this site, but for my voice over site (oh yeah, did you know I&apos;m also a <a href="https://www.willvincentvoice.com">voice over actor</a>?) - which is what we&apos;ll be discussing here. I imagine the process would be pretty similar for other sitebuilders, but this is what I know, so this is what we&apos;re covering... &#x1F60E;</p><p>I use Nunjucks; Liquid is basically the same syntax. Either way, create a file in your <code>src</code> directory called <code>_redirects.njk</code> (or .liquid as applicable).</p><!--kg-card-begin: markdown--><p>Within that file we need a little bit of front matter:</p>
<pre><code>---
permalink: /_redirects
eleventyExcludeFromCollections: true
---
</code></pre>
<p>This tells 11ty to keep it out of the <code>collections.all</code> page collection, and to ensure it gets written to the file <code>_redirects</code> in the docroot when the site is built.</p>
<!--kg-card-end: markdown--><p>Pretty easy so far, but that won&apos;t do anything other than create a blank file.</p><p>Ideally, we want to be able to just specify the <em>old</em> path in the front matter of the page where the thing lives now, or of the new page that replaced that old thing.</p><p>Say something like: <code>redirectFrom: /some/old/url</code></p><p>Or <em>maybe</em> we even want to be able to optionally redirect a bunch of old pages to this one new page, like we combined a bunch of things into one consolidated page - probably not the best move for SEO, but hey, you do you.</p><!--kg-card-begin: markdown--><p>In that case we might just want to do something like this:<br>
<code>redirectFrom: [&apos;/some/old/url&apos;, &apos;/some/other/old/url&apos;, &apos;/yet/another/url&apos;]</code></p>
<!--kg-card-end: markdown--><p>Alrighty then - let&apos;s get to it!</p><p>Let&apos;s think through the steps logically: we need to iterate through every page in our site, check if it has a <code>redirectFrom</code> directive in its front matter, determine whether that&apos;s one url or an array of urls, and write out one redirect per line. For kicks, we&apos;ll also optionally support a <code>redirectCode</code> property.</p><p>First, let&apos;s open up the 11ty config file and add a new <code>filter</code> to determine if something is a string or not; this is how we&apos;ll check if our redirectFrom property is a string or an array. Technically we probably ought to also check if it&apos;s an array, but we&apos;ll just assume it is if not a string, since we are the ones writing these files, after all...</p><!--kg-card-begin: markdown--><p>Right. An <code>is_string</code> filter looks a little something like this:</p>
<pre><code>// file: .eleventy.js

module.exports = (config) =&gt; {
  // ..snip..
  
  config.addFilter(&apos;is_string&apos;, function(obj) {
    return typeof obj == &apos;string&apos;
  })
  
  // ..snip..
}
</code></pre>
<p>Usage is simple, <code>{% if thing | is_string %}</code> and so forth.</p>
<!--kg-card-end: markdown--><p>Alright, that&apos;s all the bits in place, now we can populate the rest of our <code>_redirects.njk</code> file:</p><!--kg-card-begin: markdown--><pre><code>---
permalink: /_redirects
eleventyExcludeFromCollections: true
---
{%- for page in collections.all -%}
  {%- if page.url and page.data.redirectFrom -%}
    {%- if page.data.redirectFrom | is_string -%}
      {{ page.data.redirectFrom }}  {{ page.url }}  {{ page.data.redirectCode or &apos;301&apos;}}
    {%- else -%}
      {%- for oldUrl in page.data.redirectFrom -%}
        {{ oldUrl }}  {{ page.url }}  {{ page.data.redirectCode or &apos;301&apos;}}
        {%- if not loop.last -%}
          {{ &apos;\n&apos; }}
        {%- endif -%}
      {%- endfor -%}
    {%- endif -%}
    {{ &apos;\n&apos; }}
  {%- endif -%}
{%- endfor -%}
</code></pre>
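<p>If it helps to see the template's logic outside of Nunjucks, here's an equivalent sketch in plain JavaScript (the function name is mine; the page shape mirrors 11ty's <code>page.url</code> and <code>page.data</code>):</p>

```javascript
// Emit one Netlify redirect per redirectFrom entry, in the
// "old-url  new-url  status" format, defaulting the status to 301.
function redirectLines(pages) {
  const lines = [];
  for (const page of pages) {
    if (!page.url || !page.data.redirectFrom) continue;
    const from = typeof page.data.redirectFrom === 'string'
      ? [page.data.redirectFrom]
      : page.data.redirectFrom;
    const code = page.data.redirectCode || '301';
    for (const oldUrl of from) {
      lines.push(`${oldUrl}  ${page.url}  ${code}`);
    }
  }
  return lines.join('\n');
}
```

<p>Pages without a <code>redirectFrom</code> are simply skipped, so only pages that opt in contribute lines to the generated <code>_redirects</code> file.</p>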
<!--kg-card-end: markdown--><p>So this will look at every page, and assuming that page has a url AND a redirectFrom, it&apos;ll check if the redirectFrom is a string and if so spit out the old url, new url and optional redirect code (defaulting to 301). &#xA0;If it&apos;s an array it&apos;ll do the same for each item in the array. Ensuring newlines are added after every individual line, without extra newlines (though that wouldn&apos;t <em>hurt</em> anything)</p><p>This will get rudimentary redirects working for your 11ty site hosted with Netlify straight away. What it does <em>not</em> cover is more complex redirects with query strings, params, and so forth.</p><p>Refer to the <a href="https://docs.netlify.com/routing/redirects/redirect-options/">Netlify docs</a> for details on the proper syntax for that. Config of that is beyond the scope of this blog post. While you&apos;re over there looking at the docs, take note you can also do fun <em>proxy</em> things by setting up redirects with 200 codes. Neat.</p>]]></content:encoded></item><item><title><![CDATA[Laravel Mix won't watch my changes.]]></title><description><![CDATA[Laravel Mix's watch and watch-poll commands have stopped working on one of my computers, for unknown reasons. 
Here's how I worked around the problem.]]></description><link>https://willvincent.com/2022/06/07/laravel-mix-wont-watch-my-changes/</link><guid isPermaLink="false">629ed25730efef00016cbdcc</guid><category><![CDATA[Laravel]]></category><category><![CDATA[Programming]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Tue, 07 Jun 2022 04:44:20 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1529672425113-d3035c7f4837?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGJpbm9jdWxhcnMlMjBraWR8ZW58MHx8fHwxNjU0NTc2NzUx&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1529672425113-d3035c7f4837?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGJpbm9jdWxhcnMlMjBraWR8ZW58MHx8fHwxNjU0NTc2NzUx&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Laravel Mix won&apos;t watch my changes."><p>I don&apos;t know why, it was working, I didn&apos;t change anything significant on my machine, it just stopped working. Weird part is that it still works on another machine!<br><br>So, something about how the watcher works (or doesn&apos;t in my case) - it&apos;s failing to detect filesystem events, apparently. Even more strangely, the <code>watch-poll</code> option doesn&apos;t work either. <br><br>When I run <code>yarn watch</code> or <code>yarn watch-poll</code> it&apos;ll compile once, immediately, and then sit there pretending to watch for changes, but never compile anything again. <br><br>SUPER annoying, and super frustrating. 
I lost a whole day chasing this, and ended up without any more insight than I started with.<br><br>What I did eventually end up with, though, is a workaround of my own.<br><br>I have a <code>.scripts</code> directory in the root of my project for various dev scripts: namely a <code>release</code> script that updates my changelog file automatically from all the commits since the last release, thanks to <a href="https://github.com/conventional-changelog/standard-version">Standard Version</a> for making that painless (why oh why is it deprecated?!), and also a <code>hotfix</code> script that will cherry-pick specified commit(s) and push only <em>those</em> to the production branch.<br><br>My deployments are automatic thanks to GitHub Actions &amp; Laravel Vapor, so everything really is quite a breeze working on this project... or was, until this issue turned up.<br><br>Anyway, the workaround. I&apos;m sure I&apos;m not the only person to have to fight with this, as I know there are issues with the way filesystem events on macOS work... (or, again, don&apos;t).<br><br>In my <code>.scripts</code> directory I added a <code>watch.js</code> file:</p><!--kg-card-begin: markdown--><pre><code class="language-js">const chokidar = require(&apos;chokidar&apos;);
const _debounce = require(&apos;lodash.debounce&apos;);
const path = require(&apos;node:path&apos;);
const { spawn } = require(&apos;child_process&apos;);

const mixPath = path.resolve(__dirname, &apos;..&apos;, &apos;node_modules/.bin/mix&apos;);

const mix = function () {
  // Run a one-off mix build, streaming its output (and errors) to our console.
  const output = spawn(mixPath, [&apos;--no-progress&apos;]);
  output.stdout.pipe(process.stdout);
  output.stderr.pipe(process.stderr);
}


chokidar.watch([
   path.resolve(__dirname, &apos;..&apos;, &apos;resources/views/**/*.blade.php&apos;),
   path.resolve(__dirname, &apos;..&apos;, &apos;storage/framework/views/**/*.php&apos;),
   path.resolve(__dirname, &apos;..&apos;, &apos;vendor/**/*.blade.php&apos;),
]).on(&apos;all&apos;, _debounce(mix, 150));

</code></pre>
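<p>For what it's worth, if you'd rather not depend on <code>lodash.debounce</code>, a minimal trailing-edge debounce is only a few lines. This is a sketch of the idea, not what the script above uses:</p>

```javascript
// Trailing-edge debounce: collapse a burst of calls into a single
// invocation that fires `wait` ms after the last call in the burst.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}
```

<p>Chokidar can fire several events for one save (add, change, metadata), so debouncing keeps a burst of events from triggering a pile of redundant builds.</p>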
<!--kg-card-end: markdown--><p>The relevant dependencies should <em>already</em> be installed with laravel mix &amp; webpack, but just to be sure you might want to:<br><code>npm install -D chokidar lodash.debounce</code></p><p>I then added a new item to the scripts in the <code>package.json</code>: <br><code>&quot;watch-chokidar&quot;: &quot;node ./.scripts/watch.js&quot;</code><br><br>Running that command, watching works as expected once again, and properly re-triggers the <code>mix</code> command any time a file is added, removed, changed, etc.<br><br>If you find that it triggers more than once, especially on first run, try increasing the debounce time from 150 to something higher.<br><br>Enjoy, hope it helps someone else, and if not it felt good to vent a little bit. &#x1F60E;</p><p>EDIT: Just a week or two later, vite has replaced mix as the preferred/default bundler, and this is no longer a concern - also <em>WOW</em> it&apos;s way faster!</p>]]></content:encoded></item><item><title><![CDATA[Making Ghosts Fly]]></title><description><![CDATA[I improved my pagespeed score to 98% with minimal effort]]></description><link>https://willvincent.com/2020/05/23/making-ghosts-fly/</link><guid isPermaLink="false">5ec83948123313000135c439</guid><category><![CDATA[Miscellaneous]]></category><category><![CDATA[Ghost]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Sat, 23 May 2020 14:00:00 GMT</pubDate><media:content url="https://willvincent.com/content/images/2020/09/photo-1494889479060-a1576e190b0a.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://willvincent.com/content/images/2020/09/photo-1494889479060-a1576e190b0a.jpg" alt="Making Ghosts Fly"><p>I did this config on my previous server, and honestly don&apos;t recall exactly what my pagespeed score was before the changes but I want to say it was in the mid 60% range.</p><p>There is a simple change that can be made when running Ghost in docker, to make it perform better for the end 
user &#x2013; chuck it behind <a href="https://hub.docker.com/r/alash3al/lightify">lightify</a>, which will automagically apply a bunch of best practices - concatenation/compression of JS, CSS, etc. for you. This lets you go on about the business of blogging without worrying about how to reconfigure things to eke out better pagespeed performance.</p><p>Now as you are probably already aware, I run all my various services in docker, proxied behind traefik. It&apos;s working fabulously and I have a bit better understanding of it after my recent efforts to move to a new server and reconfigure all the things.</p><p>Anyway, I&apos;ll keep it short and to the point. Here is basically what my docker-compose file looks like for this ghost blog:</p><!--kg-card-begin: markdown--><pre><code>version: &quot;3&quot;
networks:
  proxy:
    external: true
services:
  ghost:
    image: ghost:3-alpine
    container_name: ghost
    restart: unless-stopped
    domainname: example.com
    expose:
      - &quot;2368&quot;
    networks:
      - proxy
    volumes:
      - ./config/config.production.json:/var/lib/ghost/config.production.json:ro
      - ./data:/var/lib/ghost/content
  lightify:
    image: alash3al/lightify
    entrypoint: [&quot;lightify&quot;, &quot;-http&quot;, &quot;:8880&quot;, &quot;--upstream=http://ghost:2368&quot;]
    restart: unless-stopped
    expose:
      - 8880
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy
      # Next four lines handle redirect from www to non-www version of url:
      - traefik.http.middlewares.www-redirect.redirectregex.regex=^https://www.example.com/(.*)
      - traefik.http.middlewares.www-redirect.redirectregex.replacement=https://example.com/$${1}
      - traefik.http.middlewares.www-redirect.redirectregex.permanent=true
      - traefik.http.routers.lightify.middlewares=www-redirect
      - traefik.http.routers.lightify.entrypoints=https
      - traefik.http.routers.lightify.rule=Host(`example.com`)
      - traefik.http.routers.lightify.tls=true
      - traefik.http.routers.lightify.tls.certresolver=default
      - traefik.http.routers.lightify.tls.domains[0].main=example.com
      - traefik.http.routers.lightify.tls.domains[0].sans=*.example.com
</code></pre>
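<p>You can sanity-check the www-redirect rule outside of Traefik. In JavaScript, the same regex and replacement behave like this (writing Traefik's escaped <code>$${1}</code> as <code>$1</code>, and assuming the intended non-www target is <code>https://example.com</code>):</p>

```javascript
// Traefik's redirectregex middleware matches the full request URL
// and substitutes capture groups into the replacement template.
const pattern = /^https:\/\/www\.example\.com\/(.*)/;
const redirect = (url) => url.replace(pattern, 'https://example.com/$1');

console.log(redirect('https://www.example.com/some/page')); // → https://example.com/some/page
```

<p>URLs that don't match the pattern come back unchanged, so only www requests get rewritten.</p>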
<!--kg-card-end: markdown--><p>That&apos;s basically it. Traefik points the desired host, in this case <code>example.com</code>, at lightify, which in turn is a reverse proxy for the ghost container.</p><p>Additional niceties in this config: it will redirect from the www to the non-www version of the domain. &#xA0;I also have automatic redirect from http to https set up globally within my traefik&apos;s docker-compose file, which I covered in my post yesterday about setting up <a href="https://willvincent.com/2020/05/22/running-mailcow-behind-traefik2/">Mailcow behind traefik</a>, but for brevity here are the relevant labels for that too:</p><!--kg-card-begin: markdown--><pre><code>services:
  traefik:
  
    # --- snipping out all the other stuff --
    
    labels:
      # Global Redirect to https
      - traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)
      - traefik.http.routers.http-catchall.entrypoints=http
      - traefik.http.routers.http-catchall.middlewares=redirect-to-https
      - traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
</code></pre>
<!--kg-card-end: markdown--><p>That catches incoming http requests, for any host, and redirects them to https instead. Since traefik is grabbing certs for all the things anyway, this works nicely. Caveat &#x2013; I imagine it would <em>not</em> work so nicely if you were dependent on the http challenge type for getting letsencrypt certs, so it&apos;s best to use the dns challenge and/or tls challenge.</p><p>There is one small gotcha I&apos;ve discovered with lightify, for which <a href="https://github.com/alash3al/lightify/issues/5">I have opened an issue on github</a>: if using <code>&lt;link rel=&quot;...</code> to load google web fonts, lightify is mangling the urls. The workaround is to instead directly embed the stylesheets that get loaded. &#xA0;See the github issue for full details.</p>]]></content:encoded></item><item><title><![CDATA[Mailcow behind Traefik 2.x]]></title><description><![CDATA[I found it challenging to find up-to-date, clear instructions on setting this up, so this post collects the relevant info in one place.]]></description><link>https://willvincent.com/2020/05/22/running-mailcow-behind-traefik2/</link><guid isPermaLink="false">5ec80336123313000135c354</guid><category><![CDATA[Miscellaneous]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Fri, 22 May 2020 18:32:14 GMT</pubDate><media:content url="https://willvincent.com/content/images/2020/09/photo-1566232137428-27dd00f5c6bd.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://willvincent.com/content/images/2020/09/photo-1566232137428-27dd00f5c6bd.jpg" alt="Mailcow behind Traefik 2.x"><p>All of the information necessary to get mailcow functioning properly behind traefik 2.x already exists on the web, but I found it somewhat challenging to find all the relevant info in one place.<br><br>So first off, traefik.
&#xA0;Let&apos;s set up a reasonably straightforward traefik instance that&apos;ll allow lots of services, and many domains, to run behind it...<br><br>Create a directory, <code>/opt/traefik</code>, and within it create a <code>data</code> directory where we&apos;ll store our traefik.yml and acme.json. &#xA0;On that note, <code>touch data/acme.json</code> and <em>very importantly</em> <code>chmod 600 data/acme.json</code> ... that last point was a huge head scratcher when my cert resolver suddenly stopped working while I was trying to get wildcard certs set up. If the permissions on that file are less strict, traefik will silently discard any resolver that uses that file for storage &#x2013; but the logs don&apos;t mention why, they just say &quot;resolver <em>foo</em> does not exist&quot; or whatever. Frustrating.<br><br>Ok.. so we&apos;ve got our <code>/opt/traefik</code> with a <code>data</code> directory that contains an empty <code>acme.json</code> file with appropriate permissions.<br><br>Now let&apos;s add a <code>traefik.yml</code> file in that data directory too, with the following content:</p><!--kg-card-begin: markdown--><pre><code>api:
  dashboard: true

entryPoints:
  http:
    address: &quot;:80&quot;
  https:
    address: &quot;:443&quot;

tls:
  options:
    default:
      sniStrict: true
      minVersion: VersionTLS12

providers:
  docker:
    endpoint: &quot;unix:///var/run/docker.sock&quot;
    exposedByDefault: false

certificatesResolvers:
  default:
    acme:
      email: you@example.com
      storage: acme.json
      tlsChallenge: {}
      dnsChallenge:
        provider: digitalocean
        delayBeforeCheck: 0
      httpChallenge:
        entryPoint: http
</code></pre>
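Before moving on, here's the directory prep from above collected into commands. One assumption for illustration: a relative `./traefik` path is used here so it can be dry-run anywhere; the post itself uses `/opt/traefik`.

```shell
# Directory prep from above. The post uses /opt/traefik; a relative
# path is used here purely so this can be dry-run without root.
TRAEFIK_DIR="./traefik"
mkdir -p "$TRAEFIK_DIR/data"
touch "$TRAEFIK_DIR/data/acme.json"
# Anything looser than 600 and traefik silently drops the cert resolver.
chmod 600 "$TRAEFIK_DIR/data/acme.json"
```

Your `traefik.yml` and `docker-compose.yml` then go in `$TRAEFIK_DIR/data` and `$TRAEFIK_DIR` respectively, as described above.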
<!--kg-card-end: markdown--><p>Obviously, change the email, and change or omit the dnsChallenge provider as your situation dictates. I don&apos;t know for <em>sure</em> whether having all three challenge types in there is correct &#x2013; I imagine if one fails it&apos;ll try the next, etc. &#x2013; but I don&apos;t know for certain. In any case it&apos;s working for me, including wildcard cert generation.</p><p>Next create the <code>docker-compose.yml</code> file in your <code>/opt/traefik</code> directory with the following contents:</p><!--kg-card-begin: markdown--><pre><code>version: &apos;3&apos;

services:
  traefik:
    image: traefik:2.2
    container_name: traefik
    restart: always
    security_opt:
      - no-new-privileges:true
    networks:
      - proxy
    ports:
      - 80:80
      - 443:443
    environment:
      - DO_AUTH_TOKEN=YOUR_DIGITALOCEAN_TOKEN
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/traefik.yml:ro
      - ./data/acme.json:/acme.json
    labels:
      - traefik.enable=true
      - traefik.http.middlewares.traefik-auth.basicauth.users=USER:HASH
      - traefik.http.routers.traefik-secure.service=api@internal
      - traefik.http.routers.traefik-secure.entrypoints=https
      - traefik.http.routers.traefik-secure.rule=Host(`traefik.example.com`)
      - traefik.http.routers.traefik-secure.middlewares=traefik-auth
      - traefik.http.routers.traefik-secure.tls=true
      - traefik.http.routers.traefik-secure.tls.certResolver=default
      # Omit next two lines to not do wildcard certs:
      - traefik.http.routers.traefik-secure.tls.domains[0].main=example.com
      - traefik.http.routers.traefik-secure.tls.domains[0].sans=*.example.com

      # Global Redirect to https
      - traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)
      - traefik.http.routers.http-catchall.entrypoints=http
      - traefik.http.routers.http-catchall.middlewares=redirect-to-https
      - traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https

networks:
  proxy:
    external: true
</code></pre>
<!--kg-card-end: markdown--><p>Now since our proxy network is external we&apos;ll need to manually create it:</p><pre><code>docker network create proxy</code></pre><p>Let&apos;s get your basic auth user &amp; password set up too:</p><figure class="kg-card kg-code-card"><pre><code>htpasswd -nb admin secure_password</code></pre><figcaption>Note: htpasswd is not usually available by default in a fresh linux install</figcaption></figure><p>Use the hash output from that and replace <code>USER:HASH</code> in the <code>docker-compose.yml</code> file with the username you used, and the output hash.</p><p>All that&apos;s left to do for this part is fire that bad boy up:</p><figure class="kg-card kg-code-card"><pre><code>docker-compose up -d</code></pre><figcaption>Run from within /opt/traefik</figcaption></figure><p>Assuming you have docker, docker-compose, etc. installed and your dns is set up, you should now be able to access traefik at traefik.yourdomain.com, and if you hit the <code>http</code> version it should automagically redirect to <code>https</code> ... that global redirect section of our docker-compose config is pretty magical: it catches http traffic to <em>any</em> host and redirects to https, automatically. </p><p></p><h2 id="mailcow">Mailcow</h2><p>Follow all the steps for getting mailcow up and running from <a href="https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/">their documentation</a>.<br><br>When editing the mailcow.conf file, be sure to set the following, since traefik will proxy the web connections and handle cert generation:</p><!--kg-card-begin: markdown--><pre><code>HTTP_PORT=8080
HTTP_BIND=127.0.0.1
HTTPS_PORT=8443
HTTPS_BIND=127.0.0.1
SKIP_LETS_ENCRYPT=y
</code></pre>
<!--kg-card-end: markdown--><p>Before you start it though, add a new file, <code>docker-compose.override.yml</code> to the install directory <code>/opt/mailcow_dockerized</code> with the following content:</p><!--kg-card-begin: markdown--><pre><code>version: &apos;2.1&apos;

services:
  nginx-mailcow:
    expose:
      - 8080
    labels:
      - traefik.enable=true
      # Only one `rule` label can apply per router; the Host() rule below wins,
      # so the multi-subdomain HostRegexp variant is left here as an alternative:
      # - traefik.http.routers.nginx-mailcow.rule=HostRegexp(`{host:(autodiscover|autoconfig|webmail|mail|email).+}`)
      - traefik.http.routers.nginx-mailcow.entrypoints=https
      - traefik.http.routers.nginx-mailcow.rule=Host(`${MAILCOW_HOSTNAME}`)
      - traefik.http.routers.nginx-mailcow.tls=true
      - traefik.http.routers.nginx-mailcow.tls.certresolver=default
      # Uncomment to use wildcard cert:
      # - traefik.http.routers.nginx-mailcow.tls.domains[0].main=example.com
      # - traefik.http.routers.nginx-mailcow.tls.domains[0].sans=*.example.com
      - traefik.http.routers.nginx-mailcow.service=nginx-mailcow
      - traefik.http.services.nginx-mailcow.loadbalancer.server.port=8080
      - traefik.docker.network=proxy
    networks:
      - proxy


  certdumper:
      image: humenius/traefik-certs-dumper
      network_mode: none
      command: --restart-containers mailcow_postfix-mailcow_1,mailcow_dovecot-mailcow_1,mailcow_nginx-mailcow_1
      volumes:
        - /opt/traefik/data:/traefik:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - ./data/assets/ssl:/output:rw
      environment:
        - DOMAIN=${MAILCOW_HOSTNAME}
        # If using wildcard certs instead of an explicit host cert,
        # use following line instead with just the TLD so certdumper
        # is able to find the cert.
        # - DOMAIN=YourDomain.com

networks:
  proxy:
    external: true
</code></pre>
<!--kg-card-end: markdown--><p>That should be it. &#xA0;This will get the mailcow web UI running behind traefik using either the FQDN for its cert, or wildcards. Note: if mailcow shares its domain with traefik and traefik already has wildcard certs, you <em>must</em> use the bare domain name instead of ${MAILCOW_HOSTNAME} for certdumper&apos;s DOMAIN env setting, or it won&apos;t be able to find the cert &#x2013; and in that case you can also omit the wildcard cert labels on nginx, since traefik would already have generated that wildcard cert anyway...<br><br>Ok, fire that puppy up:</p><figure class="kg-card kg-code-card"><pre><code>docker-compose up -d</code></pre><figcaption>Run from within /opt/mailcow_dockerized</figcaption></figure><p>That should get you up and running with mailcow proxied behind traefik. All the mail ports are open directly, but web traffic gets proxied.</p><p>If something isn&apos;t working, you probably missed a step here, omitted a quote or something.. or you have other issues related to mailcow &#x2013; likely dns config or firewall issues. That&apos;s all beyond the scope of this post.</p>]]></content:encoded></item><item><title><![CDATA[Leveraging bulk inserts instead of CreateMany() in AdonisJS]]></title><description><![CDATA[Adonis' ORM is great, but it's very easy to ignore performance and ship something suboptimal without thinking about it. 
Here's how to avoid that.]]></description><link>https://willvincent.com/2019/11/06/leveraging-bulk-inserts-instead-of-createmany-in-adonis/</link><guid isPermaLink="false">5ec7752728f8a00001cfb74c</guid><category><![CDATA[Programming]]></category><category><![CDATA[Javascript]]></category><category><![CDATA[AdonisJS]]></category><dc:creator><![CDATA[Will Vincent]]></dc:creator><pubDate>Wed, 06 Nov 2019 23:29:46 GMT</pubDate><media:content url="https://willvincent.com/content/images/2020/09/photo-1525538182201-02cd1909effb.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://willvincent.com/content/images/2020/09/photo-1525538182201-02cd1909effb.jpg" alt="Leveraging bulk inserts instead of CreateMany() in AdonisJS"><p>Adonis is great. &#xA0;It, like Laravel, provides a lot of nice functionality that make life easy as a developer. The ORM is pretty pleasant to work with, however it&apos;s also very easy to ignore performance and ship something suboptimal without thinking about it.</p><!--kg-card-begin: markdown--><h2 id="whichofcourseisexactlywhatidid">Which, of course, is exactly what I did.</h2>
<!--kg-card-end: markdown--><p>I&apos;ll try to explain without giving away too much inside information about the application I work on full time. In one section of our application we allow users to search for businesses, to generate an audience comprised of many (up to 10,000) locations based on whatever criteria: business name, type of business, etc.</p><p>Once a user has made and narrowed their selections, we generate polygonal data associated with each business&apos; physical location. I&apos;m not going to delve into what else happens, as that&apos;s well outside the scope of this post. &#xA0;But suffice it to say, this action results in the creation of many records in the database. Three distinct types of record, in fact:</p><p>First, there is the audience record itself: general metadata for the user&apos;s consumption within the application &#x2013; a name that is shown within lists, etc.</p><p>Next, there are two different geometry-based records. One that, again, is mostly metadata; the other contains the actual polygon and so forth.</p><p>There are reasons it&apos;s split out the way it is... in the simplest of terms, it&apos;s because the record with the actual polygon can be reused, and in a previous iteration of the application that data was duplicated, in several instances <em>many</em> times over.. so normalization was one of the specific design goals when I rearchitected the data structure.</p><p>Anyway.. so those records all need to be created, and are all inter-related.. the geometric records <code>belongTo</code> the audience, and <code>belongTo</code> the polygon. 
So there&apos;s a bit of hoop jumping that has to occur to generate things in the correct order.</p><p>Previously I was filling the polygon records with the bare minimum content so that I could get an id to populate into the associated geometric records, and then a process further down the line was verifying the lat/lng of the address to see if a better match could be found, then generating the polygon and updating those records &#x2013; that process is no longer involved, so I can populate polygons immediately, which is nice.</p><p>But in either case, what I was winding up with was, essentially, an array of ids to populate into the geometric records as the relation id when I created those, and I was doing so making use of Adonis&apos; <code>createMany()</code> method. Which is specifically intended for that purpose. &#xA0;Of course most people probably aren&apos;t creating upwards of <em>ten thousand</em> model instances at a time... and given that these are individual queries, as is stated in the docs:</p><blockquote>The <code>createMany</code> method makes <strong><strong>n</strong></strong> number of queries instead of doing a bulk insert, where <strong><strong>n</strong></strong> is the number of rows.</blockquote><p>Not really ideal for my use case. 
&#xA0;But it&apos;s been working alright, just a little slower than ideal.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://willvincent.com/content/images/2020/09/photo-1502045856-4882464b27a9.jpg" class="kg-image" alt="Leveraging bulk inserts instead of CreateMany() in AdonisJS" loading="lazy" width="1080" height="1440" srcset="https://willvincent.com/content/images/size/w600/2020/09/photo-1502045856-4882464b27a9.jpg 600w, https://willvincent.com/content/images/size/w1000/2020/09/photo-1502045856-4882464b27a9.jpg 1000w, https://willvincent.com/content/images/2020/09/photo-1502045856-4882464b27a9.jpg 1080w" sizes="(min-width: 720px) 720px"><figcaption>Photo by <a href="https://unsplash.com/@iam_aspencer?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Andrew Spencer</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></figcaption></figure><!--kg-card-begin: markdown--><h3 id="enterbulkinserts">Enter: Bulk inserts</h3>
<!--kg-card-end: markdown--><p>To be fair, I <em>was already</em> using bulk inserts in the aforementioned bare-minimum insert to get polygon record ids.. a pretty clever query too, in fact:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">const polygon_id_result = await Database.raw(`
  INSERT INTO polygons (created_at)
    SELECT NOW()
    FROM generate_series(1,?) i
  RETURNING id`, [hits])
const polygon_ids = polygon_id_result.rows.map(r =&gt; r.id)
</code></pre>
<!--kg-card-end: markdown--><p><code>hits</code> here is the number of records. &#xA0;Postgres has that interesting generate_series function, which generates a set of rows.. so the query basically just says &quot;for each of these <em>n</em> rows, insert an empty record containing just a created_at timestamp&quot;<br><br>Then I would do my createMany thusly:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">await Geometric.createMany(pois.map(poi =&gt; {
  return {
    name: poi.name,
    // other fields are not relevant... 
    audience_id: audience.id,
    polygon_id: polygon_ids.shift(),
  }
}))
</code></pre>
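To put the docs' "n queries" note in concrete terms, here's a back-of-the-envelope query count. The numbers are the ones from this post (10,000 rows, one audience record, one bulk polygon insert, and the chunk size of 500 used further down); the variable names are just illustrative.

```javascript
// Back-of-the-envelope query counts, using the numbers from the post.
const rows = 10000
const chunkSize = 500

// createMany(): one audience insert, one bulk polygon insert,
// then one INSERT per geometric row.
const withCreateMany = 1 + 1 + rows

// Chunked bulk inserts: one audience insert, then two INSERTs
// (polygons + geometrics) per chunk of 500.
const withBulkInserts = 1 + Math.ceil(rows / chunkSize) * 2

console.log(withCreateMany)  // 10002
console.log(withBulkInserts) // 41
```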
<!--kg-card-end: markdown--><p>See, that takes the array of polygon_ids and chucks them in, in order, to the appropriate records in the createMany() method. Works great.. but generates like 10k queries.. Yuck.</p><p>Instead, now I&apos;m doing bulk inserts for <em>both</em> parts.. making use of lodash&apos;s <code>chunk</code> and <code>flatMap</code> &#x2013; though with node 11 or newer, flatMap is available natively...<br><br>Basically I build up an array of items as I did before, then chunk that into groups of 500, which is an arbitrary selection, but keeps the overall query sizes reasonable, as I&apos;m inserting 5 and 8 columns respectively for each of the two tables&apos; queries.<br><br>So now, instead of the bare-minimum bulk insert, I do a bulk insert with actual polygonal data, grab those ids, and feed them into a second bulk insert into the geometric model&apos;s table. &#xA0;I&apos;m not showing all the fields here, but you get the idea:</p><!--kg-card-begin: markdown--><pre><code class="language-javascript">// Break up our array of items into chunks of 500
const chunks = _.chunk(pois, 500)

// Iterate through each chunk of 500 items. Promise.all makes sure we
// actually wait for every chunk&apos;s inserts to finish (and surface errors).
await Promise.all(chunks.map(async (chunk) =&gt; {
  const polygons = await Database.raw(`
    INSERT into polygons (geojson, square_meters, metadata, created_at, updated_at)
    VALUES ${Array(chunk.length).fill(&apos;(?,?,?,?,?)&apos;).join(&apos;,&apos;)}
    RETURNING id
  `,
  _.flatMap(chunk, (item) =&gt; {
    return [
      item.geojson,
      item.square_meters,
      JSON.stringify({ address: item.address, point: item.point }),
      created,
      created, // Twice because updated_at == created_at on creation.
    ]
  }))

  // Extract record ids from the query result
  const polygon_ids = polygons.rows.map(r =&gt; r.id)

  // Time to replace that poorly performing createMany() with some bulk 
  // insertiony goodness...
  await Database.raw(`
    INSERT INTO geometrics (name, audience_id, polygon_id)
    VALUES ${Array(chunk.length).fill(&apos;(?,?,?)&apos;).join(&apos;,&apos;)}
  `, _.flatMap(chunk, (item) =&gt; {
    return [
      item.name,
      audience.id,
      polygon_ids.shift(),
    ]
  }))
}))
</code></pre>
<!--kg-card-end: markdown--><p>Ok.. that&apos;s a little involved, let&apos;s walk through it. &#xA0;First we split the full array of items into chunks of 500, then iterate through each chunk and insert polygons, returning the created ids.. then take those created ids, and insert values into the DB for the geometric models, associating the appropriate polygon id with each.<br><br>The interesting bits here, to me anyway, are the <code>Array().fill()</code>, which takes the number of items in the current chunk and creates an array with that many instances of the value passed into fill() &#x2013; in this case, the placeholders for each record being inserted. That array is then <code>join()</code>ed with commas to generate the string of placeholders for each row to be inserted.<br><br>The resulting queries end up looking something like this:</p><!--kg-card-begin: markdown--><pre><code class="language-sql">INSERT INTO foobar (foo, bar, baz) 
VALUES (?,?,?),
       (?,?,?),
       (?,?,?),
       (?,?,?),
       (?,?,?)
</code></pre>
<p>Extra linefeeds added for clarity, of course</p>
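That Array().fill().join() trick is small enough to try on its own; here it is as a runnable sketch (foobar and its columns are just the illustrative placeholders from above):

```javascript
// Runnable sketch of the Array().fill().join() placeholder trick above:
// one '(?,?,?)' group per row, joined with commas.
const chunkLength = 5
const placeholders = Array(chunkLength).fill('(?,?,?)').join(',')

console.log(`INSERT INTO foobar (foo, bar, baz) VALUES ${placeholders}`)
// INSERT INTO foobar (foo, bar, baz) VALUES (?,?,?),(?,?,?),(?,?,?),(?,?,?),(?,?,?)
```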
<!--kg-card-end: markdown--><p>Then the flatMap takes the array of items, and for each returns an array of the data for each of the relevant placeholders &#x2013; which then gets <em>flattened</em> (hence the name), so that it effectively goes from something like this:</p><pre><code>[
  { color: &apos;blue&apos;, flavor: &apos;raspberry&apos;, id: 1 },
  { color: &apos;red&apos;, flavor: &apos;strawberry&apos;, id: 3 },
  { color: &apos;green&apos;, flavor: &apos;apple&apos;, id: 7 },
]</code></pre><p>To this:</p><pre><code>[&apos;blue&apos;,&apos;raspberry&apos;,1,&apos;red&apos;,&apos;strawberry&apos;,3,&apos;green&apos;,&apos;apple&apos;,7]</code></pre><p>Which is the format that has to be passed in as replacements for the placeholders when running those queries.</p><p>At the end of the day, this change goes from insertion/creation of 1 audience, 10,000 polygon records, and 10,000 geometric records taking a full minute or longer, to completing in about 6 seconds.<br><br>Additionally, it&apos;s far fewer queries to the database (41 if my math is correct, vs 10,002 or so previously), which means far fewer network calls that could potentially fail, and far less likelihood that the request will simply time out before completing at all. </p><p>41 still sounds like a lot, of course, and it is.. but it&apos;s not even the same sport, let alone in the same ballpark, as 10k queries.<br><br>I anticipate utilizing this pattern more often, and while working out how to manage the population of the placeholders, and mapping the data for the replacements, presented a bit of a challenge until I thought to use Array fill and flatMap, it&apos;s one of the more interesting solutions I&apos;ve found lately.<br><br>To make a long story short: if you&apos;re inserting more than maybe a dozen or so records, you almost certainly do <em>not</em> want to use createMany() ... whip yourself up a nice performant bulk insertion instead, and avoid the problems poor performance causes before you create them in the first place.</p><p>Given the similarity between Adonis&apos; Lucid ORM and Laravel&apos;s Eloquent, I suspect virtually the same solution would also apply to Laravel.. with the obvious PHP syntax changes, of course.</p>]]></content:encoded></item></channel></rss>