Full Stack React Engineer (AI-Native, React + Node.js)
Upwork

Remote
• 1 day ago
• No proposals yet
About
We're building a conversion analytics and A/B testing platform that marketers can actually use without engineering teams. Our customers run experiments, track conversions, and optimize funnels - all without tickets to dev teams or waiting for sprints.

The product is live with paying customers. We're scaling from startup traction to sustainable growth, which means: fixing what breaks, shipping what matters, and building infrastructure that doesn't collapse under load.

We need an engineer who thinks in systems, debugs with precision, and ships clean code fast. You'll use AI tools heavily - not to avoid thinking, but to move faster on problems you understand deeply.

This isn't a "build from scratch" role. You're jumping into a real codebase with real users and real constraints. Your job is to make it better while keeping it running.

What you'll actually do

Feature development (40%)
1. Build new analytics features: custom event tracking, funnel visualization, revenue attribution
2. Implement A/B testing infrastructure: variant targeting, statistical analysis, reporting
3. Create dashboard components and data visualization tools
4. Ship customer-facing features that process millions of events

Production support (30%)
1. Debug user-reported issues using Sentry traces, application logs, and analytics data
2. Trace bugs through the full stack: React → Express → Tinybird/ClickHouse
3. Fix performance bottlenecks in data queries and rendering
4. Handle edge cases in tracking scripts running on customer sites

System improvements (30%)
1. Refactor brittle code into maintainable patterns
2. Add proper error handling and observability
3. Optimize database queries for scale
4. Improve test coverage on critical paths
5. Document complex systems for future engineers

Stack
1. Frontend: React.js (hooks, context, custom components)
2. Backend: Node.js + Express (REST APIs, webhooks, background jobs)
3. Data: Tinybird (ClickHouse) for analytics, Supabase for application data
4. Infrastructure: Vercel, Railway, Cloudflare
5. Tools: Cursor, Sentry, Linear, Slack, GitHub

Requirements
1. 4+ years of React + Node.js in production environments
2. Deep Cursor proficiency: context management, multi-file edits, prompt engineering for complex refactors
3. Can read unfamiliar codebases and trace bugs through distributed systems
4. Understands observability: logs, error tracking, performance monitoring
5. Experience with analytics pipelines, event streaming, or data-heavy applications
6. Clear written communication for async collaboration
7. Comfortable owning features end-to-end: from design discussions through deployment

You'll succeed here if you:
1. Can jump into a bug report, reproduce it locally, and ship a fix the same day
2. Know when to refactor vs. when to ship a pragmatic solution
3. Write code that other engineers can maintain six months later
4. Use AI to accelerate, not to avoid understanding what you're building
5. Ask clarifying questions before building the wrong thing
6. Can explain technical tradeoffs to non-technical founders

Red flags we avoid
1. Heavy reliance on AI without understanding its output
2. Can't explain architectural decisions or debug generated code
3. Needs hand-holding on tooling setup or environment configuration
4. "Move fast, break things" mentality (we have paying customers)
5. Poor async communication or going dark for days

Work structure
1. 15-25 hours/week to start (potential to scale up)
2. Fully remote (Eastern Europe or South America preferred for timezone overlap)
3. Async-first: work when you work best, with 2-3 hours of sync overlap with North American timezones
4. Contract basis with potential to grow into a larger role as the company scales
5. Direct collaboration with founders (no middle management, no bureaucracy)

Why this matters
1. Real product with revenue and a growth trajectory
2. Your code directly impacts customer outcomes
3. Technical decisions influence product direction
4. Work with a small, senior team that values craft
5. Opportunity to shape engineering culture as we scale

To apply - REQUIRED

Record a 10-15 minute screen recording of yourself using Cursor to either debug a real issue or build a small feature. This is non-negotiable and the primary filter for this role.

What to record:

Option A: Debug a bug
Take a real bug from a codebase you have access to (or create a complex one in a demo project). Show yourself:
1. Reading the error/issue description
2. Setting up context in Cursor (which files, what background)
3. Using Cursor to trace the problem
4. Explaining your thought process out loud as you work
5. Testing and verifying the fix

Option B: Build a feature
Pick a small but non-trivial feature (e.g., "add filtering to a data table", "implement a user authentication flow", "create a chart component"). Show yourself:
1. Planning the approach
2. Setting up Cursor context properly
3. Writing prompts that get useful responses
4. Reviewing and modifying generated code
5. Testing the implementation

What we're evaluating:
1. How you structure prompts (vague vs. specific, context-aware vs. blind)
2. Your ability to review and critique AI-generated code
3. How you handle it when Cursor gives you wrong or incomplete answers
4. Your debugging process and systematic thinking
5. How you verify that your work actually works
6. Your explanation of tradeoffs and decisions

Technical requirements:
1. Record with Loom, Screen Studio, or similar (include audio of you explaining)
2. Show your full screen, including the Cursor interface
3. Talk through your thinking - we want to hear your problem-solving process
4. Don't edit out mistakes - we want to see how you recover
5. Upload the recording as unlisted/private and send us the link

What NOT to do:
1. Don't record yourself following a tutorial
2. Don't stage a fake "bug" with an obvious solution
3. Don't submit a recording where you're just accepting Cursor suggestions without review
4. Don't skip the audio explanation - we need to hear how you think

Why this matters - the recording shows us:
1. Whether you understand the code Cursor generates
2. How you handle context and multi-file changes
3. Your actual debugging methodology
4. Whether you're using AI to enhance your skills

This takes 15 minutes to prepare and record. If that feels like too much effort, this role isn't for you.





