Note: This post is backdated to the date of the last draft (27 Dec 2021) as I changed my job and role and didn't want to bias / inform myself by that. It's an unfinished fragment of my thinking at that point in time that I just cleaned up a little and added references where necessary, but it's still rough and incomplete.
I've never been happy with the tech interview process and have been burned by it many times - being under-levelled, landing in a role I barely understood (one reason I launched the Tech Job Titles list), getting rejected over algorithm questions ("write a solver for Tetris but in n dimensions"), or simply not even being screened because of "no product experience". This form of gatekeeping in the tech industry is one of my pet peeves, but it's also simply unproductive and inefficient. It only works because of survivorship bias for people from top universities who are prepped with special courses, books, test interviews and special coaching by tech firms - as such it functions more as a ritual than a test of actual role fit and/or culture add. It is basically a code (e.g. speaking out loud during programming), and by knowing that code the interviewee signals to the interviewer that they are part of the same group ("likeability").
My interview style
I've done about ~250 tech interviews at Google and ~250-500 at Accenture, plus quite a few in not-for-profit volunteering and my own startups - regardless of whether the role was called programmer, engineer, consultant or architect. Instead of going into depth on what the current interview process is or what's broken about it (the list has a few pointers and there is great research by others), let me highlight what is important for me in tech interviews (leaving behavioural and hypothetical questions aside):
Standardized questions and structured answer guides
- Short, easy to understand and free of bias or misunderstanding - a 45min interview should focus on solving the problem, not understanding it. In particular do not use jargon ("reverse a linked list"), slang ("let's solve this IRL"), especially not tech slang that excludes non-traditional backgrounds ("how would you disrupt Netflix?"), language that's hard to understand for non-native speakers on a call ("given a parallelogram with dimension a 15 inches and h 50 inches..."), US-focused cultural references ("build a system to optimize the order of the NBA draft") or brainteasers, because they require a certain education (e.g. a western-style MBA). A good starting point is real-world problems and common things around us globally.
- Open and helpful - the goal of an interview is not to stun a candidate or impress them with the smartness of the interviewer. Questions should "fan out" a possible narrative like a tree and allow the candidate to go in different directions. A good answer guide has nudges and hints of different degrees, and links to the rubric to show whether and how a hint influences the final rating. Building a reasonable hypothesis under ambiguity (not knowing the right answer immediately) and validating it is itself an important skill.
- Versatile and flexible - the question should cover a broad range of skill / experience levels and be usable with slight modifications in different rounds or for different roles (see below). In particular the question should allow for drill-downs or follow-ups that probe deeper while still giving the candidate a good experience and yielding some signal even if the original answer or direction was not optimal. The answer guide should rank answers by skill / experience level, not just give an ideal answer. An ideal question is extremely simple and gets more complicated with each drill-down (the easiest drill-down is "Why?"). Facts should be the nodes of the answer search tree, not the root - e.g. if the first question is "How would you build a messaging app like WhatsApp or iMessage?", at the end it could be "What's a reasonable SLO for a message to show up at the receiver?". The only exception are some easy fact questions in the beginning to verify a common basis and ease into the interview.
- Able to add more attributes, e.g. cognitive ability - for some roles or levels not only the "what" but the "how" becomes important, for instance whether the candidate makes clear assumptions or decisions and scopes the answer properly. The answer guide should allow drill-downs for that; easy ones are role plays, e.g. "How would you explain this approach to a less experienced peer so they can implement it?"
- Clear grading rubric that shows expected skill, level and outcome with example answers - e.g. in a system design question one particular sub-skill or attribute might be "concurrency" and another "reliability". If the role expects competent in concurrency and proficient in reliability architecture, it should be easy for the interviewer to gauge the level from the answer guide's grading rubric in the context of the question, e.g. "Concurrency: Competent if speaking about (lightweight) threads or processes and distributed systems without going into consistency models e.g. synchronization, consensus" (see the sketch below). Usually this requires some general definition of those skills or experiences across all roles and a role ladder. However, innovative and original answers should be encouraged and feed back into the answer guide.
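To make that a little more tangible, here is a minimal sketch of how such a rubric could be encoded - the skill names, level definitions and role expectation are invented for this example, not taken from any real rubric:

```python
# Hypothetical rubric for one system design question; all signals are examples.
LEVELS = ["familiar", "competent", "proficient"]

RUBRIC = {
    "concurrency": {
        "familiar": "names threads or processes, no coordination discussed",
        "competent": "discusses (lightweight) threads / processes and distributed "
                     "systems without going into consistency models",
        "proficient": "reasons about consistency models, synchronization, consensus",
    },
    "reliability": {
        "familiar": "mentions retries or backups",
        "competent": "designs for redundancy and failover with rough SLOs",
        "proficient": "quantifies SLOs, error budgets and failure domains",
    },
}

# What the role ladder expects for this particular role and level.
ROLE_EXPECTATION = {"concurrency": "competent", "reliability": "proficient"}

def meets_bar(observed):
    """True if the observed level reaches the role expectation for every skill."""
    return all(
        LEVELS.index(observed.get(skill, "familiar")) >= LEVELS.index(expected)
        for skill, expected in ROLE_EXPECTATION.items()
    )

print(meets_bar({"concurrency": "proficient", "reliability": "competent"}))  # False
```

The point is not the code but that the mapping from observed signals to levels is explicit, so two interviewers rate the same answer the same way.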
Managing Complexity
For example, algorithms and data structures are important, but I usually look for the why, not pattern recognition - I don't care how easily a candidate can "crack" a problem by identifying that it can be solved with a boilerplate tree, hash table or dynamic programming solution and hack that down in 10 minutes in a coding interview. I do fully recognize that research shows knowing algorithms is a good predictor of job performance because of the ability to learn, but I apply that insight to types of problems, not memorization. For instance, realizing a problem is complex (e.g. NP-complete) and discussing options to approximate or simplify a solution with good real-world assumptions is more impressive than coding the standard solution. For that reason I prefer code reading and improvement questions - more below.
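As a toy illustration of what I mean by approximating with real-world assumptions (my own made-up example, not an interview question I've used): exact route optimization is NP-hard, but a greedy nearest-neighbour heuristic plus a short discussion of when it's good enough is exactly the kind of reasoning I want to see.

```python
import math

# Toy example: exact TSP is NP-hard, but under real-world assumptions
# ("stops are clustered, a somewhat longer route is acceptable") a greedy
# nearest-neighbour heuristic is often good enough - discussing that
# tradeoff is the signal, not the code itself.
def nearest_neighbour_route(stops):
    """Greedy route starting at stop 0: always visit the closest unvisited stop."""
    if not stops:
        return []
    unvisited = set(range(1, len(stops)))
    route = [0]
    while unvisited:
        here = stops[route[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, stops[i]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

print(nearest_neighbour_route([(0, 0), (5, 1), (1, 1), (4, 4)]))  # [0, 2, 1, 3]
```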
The Scope - Constraint - Focus model
I quite like take-home / homework or pair programming questions, but they can introduce bias towards certain languages, frameworks or paradigms. Worse though, especially around Google there is an industry of coaching that goes to the extent of fraud, and questions are frequently leaked, which requires constant monitoring and adjustment. I personally had a lot of first-hand experience with other people in the room, electronic help, memorized answers and attempts to write down or record interview sessions. This challenge has to be balanced with the need for standardized questions - writing good answer guides and rating rubrics is hard, and having to throw them away is frustrating.
A good middle ground between homework and live coding / whiteboarding (which can be quite stressful and simply not realistic, but sadly may be necessary to validate an answer) is code reading and improvement. You provide a piece of code, a design or data upfront, usually 24 hours in advance, and the candidate has to come up with improvements, some on their own and some based on system properties (e.g. "How would you make this 2x faster?"). I usually prefer homework that needs a good discussion and explanation and a little bit of live improvement in a shared document. That can be adding / fixing code, drawing a diagram differently or doing an analysis on data (e.g. "what data would have to be cleaned up first?"). If necessary the code reading can also be live, however properly understanding a piece of code or system can easily take 10 minutes which you won't get back. Needless to say, coding interviews should allow any common language, never test syntax or conventions and generally not allow helper libraries (rather simplify the problem than require libraries that exclude some candidates).
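To illustrate, a code reading prompt can be as small as the snippet below (made up for this post, not an actual question I hand out), with the live follow-up "How would you make this 2x faster for a million records, and what would you trade off?":

```python
# Deliberately naive snippet handed to the candidate ahead of the interview.
def find_duplicates(records):
    """Return every value that appears more than once, in first-seen order."""
    duplicates = []
    for i, record in enumerate(records):
        # O(n^2): each record is compared against the rest of the list,
        # and the membership check on `duplicates` is linear too.
        if record in records[i + 1:] and record not in duplicates:
            duplicates.append(record)
    return duplicates

print(find_duplicates(["a", "b", "a", "c", "b", "a"]))  # ['a', 'b']
```

The interesting signal is not the rewrite itself (a set or collections.Counter gets you to linear time) but whether the candidate explains the cost model and asks about memory, ordering or streaming input before changing anything.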
Going deeper into "versatile and flexible" from above: over time I've developed some frameworks to create new questions that can share portions of answer guides and rubrics; I call that the Scope - Constraint - Focus model. I know frameworks are biased too (the STAR model has led to highly ritualized behavioural questions for instance, which reinforces the "code" problem), but I feel this one is flexible enough. A good example are NALSD (non-abstract large system design) questions:
- Scope is the problem itself, the story. While the question should define a rough context / boundary, managing the scope within the timeframe for the answer is also a great way for a candidate to show skill and for the interviewer to employ nudges, so scope management (e.g. assumptions, preparation) should be possible. For instance "Design an air traffic control system".
- Constraint adds a (varying) degree of complexity, like the complication in SCQA. It tests for application of experience, intuition, risk approach and tradeoffs - as we say, architecture is the significant decisions. Constraints are the easiest to tweak for different skill or role levels and can make a question easier or harder. For instance "Design an air traffic control system for a regional airport so that information is never older than 200ms".
- Focus allows for better time management and for using terms specific to the role. It's optional if the goal is to actually test scope and time management, but important if facts or drill-downs matter. For instance "Design the high-level components of an air traffic control system backend for a regional airport with a data freshness SLO of 99.99% consistent within 200ms". However, it's important not to get into random details that add noise instead of reducing it.
Testing for Experience
The Integration Engineer role
As highlighted in my list of Tech Job Titles (GitHub), some roles are especially ambiguous because they depend a lot on the org structure around the role - I had mentioned the Solution Architect. Here I want to give one more example, the "Integration Engineer". It may also be called Deployment Engineer, Migration Engineer, Field Engineer, Delivery Architect, Technology Architect, Customer Solution Engineer, Implementation Services, System Engineer, Solutions Engineer or basically anything in Professional Services. In my last role at Google (see the note on my role change above) it was called "Strategic Cloud Engineer" (SCE) or simply "Cloud Data Engineer" or "Cloud Infrastructure Engineer".
What is special about the Integration Engineer role is that it's an engineering role, but it requires more empathy, particularly cognitive empathy, because it's someone else's system. And that system exists - it's not legacy, it's heritage, and changes might have surprising, unknown side effects. In that regard Integration Engineers are very similar to SREs, just that SREs focus on internal customers, whereas integration engineers focus on external ones. SREs therefore have a clearer framework and metrics, SLOs for instance, whereas integration engineers also need to be flexible with processes and communication standards, and often literally translate or bridge cultures. Both deeply care about the system, want to improve it and may often be "on the hook" for it, yet only have influence over it, not control, power or authority. Both SREs and integration engineers are masters in observability and legibility of sociotechnical systems.
So how would I hire Integration Engineers ideally (this is not a hiring guide for the SCE role, and the process is very different)? Trying to combine as many independent factors as possible:
- Great pre-screening by recruiters that identifies strengths, options and relevant focus areas for drill-down, especially for a first conversation (e.g. "tell me about the hardest bug you solved that made you happy")
- Design homework: drawing up a basic integration with a clear constraint, e.g. a single-threaded single system. Then changing the constraint in the interview, e.g. to a serverless system in the cloud, so the context / scope stays clear and doesn't change.
- Data analysis homework: Coming up with certain insights and discussing them in the interview, drawing conclusions out of those, for instance how certain features are used or what values are present in an API or integration.
- Coding homework with an existing integration, changing it "roughly" - for instance from an existing version 1 of an API to a provided version 2 that requires some data wrangling (a sketch of what that could look like follows after this list). It could be followed by live pair programming to make the change production-ready.
- Live system design questions around rolling out such changes - in particular what questions to ask the other side and how they influence the solution, how to prioritize requirements and estimate integration patterns - potentially even a role play with a customer who has certain requirements.
- Finally, behavioural and hypothetical questions about past experiences. Campfire war stories are always great insights but also relaxing and fun, and close a set of interview rounds nicely.
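As a rough sketch of the coding homework mentioned above - the v1 and v2 schemas and field names are invented for this post; the real exercise would ship with sample payloads and a short spec of v2:

```python
# Hypothetical homework: migrate customer records from a (made-up) v1 API
# payload to a (made-up) v2 schema, including a bit of data wrangling.
def v1_to_v2(v1_record):
    """Convert a flat v1 customer record into the nested v2 shape."""
    first, _, last = v1_record["name"].partition(" ")
    return {
        "customer": {"first_name": first, "last_name": last},
        "contact": {"email": v1_record["email"].lower()},
        # v1 stored the country as free text; v2 expects an ISO code, so part
        # of the exercise is deciding (and discussing) how to clean this up.
        "country_code": v1_record.get("country", "").strip()[:2].upper() or None,
    }

print(v1_to_v2({"name": "Ada Lovelace", "email": "Ada@Example.com", "country": "gb"}))
```

The pair programming part would then be about edge cases: names without a space, missing country fields, or whether the cleanup belongs in the migration or in the v2 API itself.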