I have had the same problem in applications of all sizes. The build, test, pull-request, deploy, test again, and promote lifecycle is absolutely mind-numbing, and each step takes forever.
Being able to hammer on save and see changes render immediately is extremely important to me for keeping my flow while working. This is a key benefit of Replit, Heroku, React Native, and other lightweight tools and clouds. These examples show how upfront complexity can be reduced through some form of encapsulation.
The next cloud
I only use cloud services with instant deployments. If it takes longer than restarting a daemon or installing some packages, it's really going to kill my flow.
I love immediate feedback: the feeling that I can readily add new features and see the changes work the first time. I also like building features all at once and shipping them; that's most gratifying to me.
Heroku is great on paper, but at Replit at least, we're not using it the way we really want to: our test suite alone takes 15 minutes to run for our Next/Express app, and builds take 5-10 minutes. Wait, why are we reinstalling 5k different node dependencies on each deployment?
The product we're building at Replit is what brings the cloud down to earth.
If a particular app, feature, or infrastructure piece takes more than 10-20 seconds to build, test, or deploy, or if it uses completely different build technologies, I make sure it has a deployment lifecycle separate from the rest of the application. A git repository and a separate server/cloud pipeline is sufficient.
The more bits and pieces that have to be built to make a change to a single part of an application, the more likely something is going to break in surprising ways, simply because of the sheer amount of surface area.
Each service or framework in your app is best in its own project and deployment lifecycle; on Replit, we call this a Repl: an instant-run application environment in any language on our cloud.
I learned some important lessons in the embedded world about how, even when building hardware, you can think of devices, web applications, daemons, and other tooling as services that interface with underlying data structures. Because each part of the system is a more or less asynchronous, contained, and auto-discovered interface into a data structure, it's easy to think of different devices and blobs as hot-pluggable and instantly deployable.
Having decided that something is a distinct application or service, the need for a service-like architecture emerges. The simple solution is usually to have the application expose a REST API or a socket to communicate with other services. The more routers, VPCs, and garbage in the middle, the less fun it is. Name it something, give it to me, and let me send and listen for JSON. I don't want to write 50 extra documents, GraphQL schemas, and type signatures beyond the struct itself to describe what I'm sending and to test whether it's being sent.
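As a sketch of what "just let me send and listen for JSON" can look like with nothing but the standard library, here are two in-process endpoints standing in for two services; the newline-delimited framing and the `ping`/`pong` message shape are made up for illustration:

```python
import json
import socket

def send_json(sock, obj):
    # Newline-delimited JSON: one message per line, no schema layer.
    sock.sendall((json.dumps(obj) + "\n").encode("utf-8"))

def recv_json(sock):
    # Read until a newline, then parse the line as JSON.
    buf = b""
    while not buf.endswith(b"\n"):
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed the socket")
        buf += chunk
    return json.loads(buf)

# A socketpair stands in for two services talking over the network.
client, server = socket.socketpair()
send_json(client, {"type": "ping", "payload": 42})
msg = recv_json(server)
send_json(server, {"type": "pong", "payload": msg["payload"]})
reply = recv_json(client)
print(reply)  # {'type': 'pong', 'payload': 42}
```

The whole "contract" is the dict you send; in a real deployment the socketpair would be a named TCP endpoint, but nothing else about the exchange needs to change.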
When product owners have direct control of and responsibility for a service or product they own and manage, they typically have a better relationship with maintenance, on-call, and inter-service APIs. It's yours: you own it, iterate on it, and can always share a relevant snapshot of where your progress is at, unlike when your work is tied up behind code review and deployment cycles.