Polling: A No-Code Legacy That Must End

There was a time when no-code platforms could not sense a modified record and inform other applications that something in their data had changed. These were immature architectures hurried into production without anticipation of success, despite being created at the apex of the API economy.

Lacking features like an events API to broadcast data changes automatically, integration and automation tools like Make and Zapier were forced to poll every few minutes to see if anything was new or updated. Polling uses the public REST APIs offered by no-code platforms. Generally speaking, these APIs were poorly designed afterthoughts intended to satisfy a thirst for data integration. Luckily, SmartSuite's API is an exception; my experiments thus far indicate it was carefully designed. But even a well-designed API, when polled, will face voluminous request pressure that will not scale.
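To put "request pressure" in concrete terms, here is a back-of-envelope calculation. The account and recipe counts are hypothetical; only the two-minute polling interval reflects how these tools actually behave.

```python
# Back-of-envelope arithmetic on polling request volume. The account and
# recipe counts below are hypothetical illustrations; only the two-minute
# interval comes from typical polling behavior.

polls_per_day = 24 * 60 // 2       # one poll every two minutes
accounts = 10_000                  # hypothetical customer base
recipes_per_account = 5            # hypothetical polling recipes per account

daily_requests = polls_per_day * accounts * recipes_per_account
print(f"{polls_per_day} polls/day per recipe")
print(f"{daily_requests:,} API requests/day across the platform")
# Nearly all of these requests come back with "nothing changed".
```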

Polling is Grossly Inefficient

Polling requires the automation process to capture a snapshot of the data and then, every two minutes, make another API call to fetch the data again and compare the two snapshots. There is no equivalent to HTTP's conditional GET, which lets a client learn whether a web page has changed without re-downloading the entire page. And yet - almost every scenario built with these tools is based on polling. If you create a Zapier recipe triggered by a change in a SmartSuite table, I am almost certain a polling relationship with SmartSuite's API will be established.
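For contrast, here is a toy sketch of how HTTP's conditional GET works, with the server simulated in-process. Real servers implement this with ETag or Last-Modified headers and a 304 Not Modified response; the FakeServer and ConditionalClient names are illustrative, not any platform's actual API.

```python
class FakeServer:
    """Stands in for an HTTP server that supports ETags (illustrative only)."""
    def __init__(self, body):
        self.body = body

    @property
    def etag(self):
        return str(hash(self.body))   # real servers hash content or track versions

    def get(self, if_none_match):
        if if_none_match == self.etag:
            return 304, None          # Not Modified: headers only, no body sent
        return 200, self.body         # full payload

class ConditionalClient:
    """Caches the ETag and body; re-downloads only when the server says so."""
    def __init__(self, server):
        self.server = server
        self.cached_etag = None
        self.cached_body = None

    def fetch(self):
        status, body = self.server.get(self.cached_etag)  # sends If-None-Match
        if status == 200:
            self.cached_etag = self.server.etag
            self.cached_body = body
        return status, self.cached_body

server = FakeServer("row 1, row 2")
client = ConditionalClient(server)
print(client.fetch())  # (200, 'row 1, row 2')  first request pays full cost
print(client.fetch())  # (304, 'row 1, row 2')  unchanged: cheap check
server.body = "row 1, row 2, row 3"
print(client.fetch())  # (200, 'row 1, row 2, row 3')  change detected
```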

ARCHITECTURAL UNCERTAINTY: I hope the integration layer that SmartSuite has created allows Make, Ply, and Zapier to work with your data using an events architecture. 🤞

In automation, one of the basic rules is to remove unnecessary requirements (make requirements less dumb). Polling is one of the dumbest.

2018: No-code Automation Requirements

  1. Take a snapshot of the data table

  2. In two minutes, take another snapshot of the table

  3. Test each field to see if any of the values are different

  4. If any field (we care about) has changed, take a new snapshot and do something

  5. Goto #2, rinse repeat, ~720 times per day, ~262,800 times per year
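The five 2018-era steps above can be sketched in Python. The fetch_table and handle_change names are hypothetical stand-ins for a platform's REST API and the recipe's action, and matching rows by position with zip is a simplification (real pollers must also handle added and deleted records).

```python
import time  # only needed by the loop sketch below

def poll_once(snapshot, current, watched_fields, handle_change):
    """One polling cycle: diff two snapshots, fire on watched-field changes."""
    changed = [new for old, new in zip(snapshot, current)         # 3. test each field
               if any(old.get(f) != new.get(f) for f in watched_fields)]
    for row in changed:
        handle_change(row)                                        # 4. do something
    return changed

def poll_forever(fetch_table, handle_change, watched_fields, interval_s=120):
    snapshot = fetch_table()                                      # 1. take a snapshot
    while True:
        time.sleep(interval_s)                                    # 2. wait two minutes
        current = fetch_table()                                   #    ...snapshot again
        poll_once(snapshot, current, watched_fields, handle_change)
        snapshot = current                                        # 5. goto #2, rinse, repeat

# One cycle in isolation: only the 'status' change on record 2 fires.
events = []
before = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
after = [{"id": 1, "status": "open"}, {"id": 2, "status": "done"}]
poll_once(before, after, {"status"}, events.append)
print(events)  # [{'id': 2, 'status': 'done'}]
```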

2023: No-code Automation Requirements

  1. Listen for change events that occur in a table

  2. Do something, ~22 times per day, ~8,030 times per year.
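The two 2023-era steps above amount to a webhook receiver: expose an endpoint and let the platform push change events to it. A minimal sketch using only Python's standard library follows; the JSON payload shape is an assumption, not SmartSuite's or Zapier's actual event format, and the push is simulated with a local POST.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

EVENTS = []  # events the automation has reacted to

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))   # 1. listen for change events
        EVENTS.append(event)                          # 2. do something with it
        self.send_response(204)                       # acknowledge, no body needed
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request console logging

# Run the receiver on an ephemeral port and simulate the platform pushing
# one change event via a local POST.
server = HTTPServer(("127.0.0.1", 0), WebhookHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = json.dumps({"record_id": 2, "fields": {"status": "done"}}).encode()
request = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}", data=payload,
    headers={"Content-Type": "application/json"}, method="POST")
urllib.request.urlopen(request)
print(EVENTS)  # the one pushed event; no snapshots, no diffing
server.shutdown()
```

The receiver does no work between events, so its request volume tracks the actual change rate rather than the clock.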

There are bazillions of automation recipes doing it the 2018 way. It's not sustainable. It needs to change; you probably have the tools to do it, and I hope many recipes are already shifting toward an event architecture.

But if you don't, a new crop of tools is [seemingly] coming to your rescue. I say "seemingly" because I believe these tools must also rely on a fair amount of polling to get the job done. That job is monitoring automation health, and Operator is the first no-code DevOps platform designed to help you do it. But it doesn't eliminate the inefficiency of polling; it tells you when an automation fails and assists you in managing and updating automations.

Remedy? Perhaps.

I like the positioning of Operator and its noble quest to rescue us from the insanity that surrounds no-code solutions of substance. I've feared the deep and sometimes irrational dependencies on glue-factory automation tools since I had to help a client unwind a massive collection of 320 Zaps. The decoupling was driven by VCs who instinctively knew that deep dependencies on two no-code platforms prohibited any chance of corralling the solution's IP.

It's no secret - I'm a fan of code, owning your destiny, and capturing intellectual property in software. I've tried to rationalize the use of these application adhesives, but I remain skeptical and advise clients to narrowly define cases where such approaches are in their best interests.

As to tools like Operator, I'm concerned they may create Heisenbugs - bugs that change behavior or disappear when you attempt to observe them. Named after Heisenberg, the term captures, in the most general sense, the idea that the very act of measurement or observation directly alters the phenomenon under investigation.

This is to say that if you use enough energy (compute cycles) to observe another process, you will alter the outcome of that process. This is well understood among designers of real-time analytics systems, who use "out-of-band" monitoring to remove as much interference as possible when capturing data about the target process.

Operator must do this, but can it really?

It must use the Zapier API (they're developing the Make version). I'm open-minded, so I welcome comments from Phil [Lakin] and his team. I'm anxious to learn more about the underlying architecture of Operator.

Circling back to the polling legacy and dumb requirements, one must ask - as cool as Operator may be, are we implementing monitoring processes for automations that include requirements that should not exist in the first place? Is Operator itself another polling system that will increase pressure on an already strained API ecosystem?