Hacker News | tiagogm's comments

You don't need to discuss everything.

The oh-so-common 2-hour+ full-team scoping session, where half the team doesn't care what the other half is talking about because it's irrelevant to them - it tires everyone out and produces very little value beyond the impression that "we're now aligned".

Things only need to be discussed if and when necessary, and only by as many people as required; everything else can be followed up later - it's really okay. Personally, I've always found formats like "three amigos" to be much more time-effective.


In my experience A/B testing has a time and place - and that is after a certain level of traffic and product/feature maturity, and only to "validate" certain hypotheses.

For low volumes of traffic, A/B testing would take ages to yield significant results, and for products still maturing and taking shape there is a lot of "wisdom of crowds" data already available to help make decisions faster (i.e. do you really need an A/B test to know that offering a timely promotion to users helps conversion?)

If you've got a young product trying to grow fast, it's a lot more effective to rely on experienced product people and simple off-the-shelf analytics to iterate quickly and take some bets, so that one day you reach a point where A/B-testing "optimisations" starts to make sense.

It's quite an interesting topic! I agree with you too - A/B-test-driven sites tend to culminate in a terrible "cumulative experience" for users.


Plum Guide | Backend Engineers and Frontend Engineers | Fully Remote (Europe & UK Only) | Full-time

Plum Guide is on a mission to solve the uncertainty of travel. We find and curate the best homes on the planet, using a mix of humans and machine learning to help our guests find the perfect stay, every time. We’ve been focusing on scaling and launched 150 new locations in the last 6 months.

We are hiring engineers who care about our customers and want to help us scale the company, our engineering team and our culture. Our teams are cross-functional and highly collaborative; our engineers work very closely with product, design and data. We are big fans of “DevOps culture” and continuous delivery.

Our stack is always evolving but it’s primarily composed of:

On the frontend we are building with React 16+, NextJS, Stitches, Storybook, Nginx and Jest. We are also working with designers to shape our in-house design system.

Frontend (Senior) spec: https://boards.greenhouse.io/plumguide/jobs/4584491003

Frontend (Mid) spec: https://boards.greenhouse.io/plumguide/jobs/4584490003

On the backend we are building a microservices architecture primarily using C# (.NET Core), SQL (& NoSQL), Redis, Docker, Kubernetes and Pulumi.

Backend (Senior) spec: https://boards.greenhouse.io/plumguide/jobs/4584486003

Backend (Mid) spec: https://boards.greenhouse.io/plumguide/jobs/4584478003

You can read a bit about us on our blog (https://medium.com/plumguide).

You can find all our job listings here (https://boards.greenhouse.io/plumguide).


Kanban lends itself well to this.

Breaking down work into similarly sized tickets/units can, over time, be used to predict delivery/capacity (which can be calibrated using cycle time).

Even neater: with enough data it becomes possible to use Monte Carlo simulations to give you confidence intervals on how much you can do, or how long it will take to do X amount of work.

https://kanbanize.com/kanban-resources/kanban-analytics/mont...
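To make the idea concrete, here's a minimal sketch of that kind of Monte Carlo forecast. The throughput history and backlog size are made-up numbers; the technique is simply resampling the team's actual weekly throughput until the backlog is exhausted, then reading percentiles off the simulated outcomes:

```python
# Monte Carlo delivery forecast sketch. The weekly throughput history
# below is hypothetical - in practice it comes from your board's data.
import random

historical_weekly_throughput = [4, 6, 5, 3, 7, 5, 4, 6]  # tickets finished per week
backlog = 40          # tickets left to deliver
simulations = 10_000

random.seed(42)
weeks_needed = []
for _ in range(simulations):
    done, weeks = 0, 0
    while done < backlog:
        # Resample a week of throughput from the team's real history
        done += random.choice(historical_weekly_throughput)
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
# Report a confidence window rather than a single-point estimate
p50 = weeks_needed[int(0.50 * simulations)]
p85 = weeks_needed[int(0.85 * simulations)]
print(f"50% chance of finishing within {p50} weeks, 85% within {p85} weeks")
```

Because it draws from observed data, the spread of the result automatically reflects the team's real variability, instead of a single optimistic guess.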

I find this approach a lot less time-consuming, and more predictable and reliable.


    Breaking down work into similarly sized tickets/units can,
    over time, be used to predict delivery/capacity
IMO "can break up work into similarly-sized units" is equivalent to "can estimate accurately".

Re: that article - I can't imagine many things LESS accurate than "we have 104 tasks on the board and each team member's cycle time is 2 days so we can finish all the tasks with 10 people working for 20.8 days". Yeah, it makes for a nice graph - but it omits important details like dependencies...


That's not how it works, though - you never deal in terms of individual team members. The team is the unit of delivery. Otherwise you end up with people who should know better putting more people onto teams to "help".

Teams above a certain maturity level do often settle on a certain number of delivered tickets per month, and when you're looking at that sort of resolution, dependency problems and other factors like those you mention are represented in the data. It's not so much a measure of how productive the team is, it's a measure of how much work the team can get done embedded in the organisation they're in, which covers off their ability to resolve blockers and communicate with other teams.

There's a very different cognitive framing if you count tickets, too: you're not saying to the team "come up with a number, you're going to get shouted at if it's wrong, and you've only got 10% of the relevant information to hand", you're saying "do your usual design process, and we'll use the output to make a projection based on the history." Functionally it might be equivalent to "can estimate accurately" but it doesn't work like that when you're the one in the hot-seat.


True, I do find it easier and less time-consuming to break a large piece of work into roughly two-day chunks (for example) than to use different sizes or try to decide whether something is a 3 or a 5.

In the end, regardless of the scoring strategy you use, it should always be team-centric rather than individual.

It is possible to say: over the past 6 months, across X tickets, our team has had a median/average cycle time of around 2 days. If the team breaks future work into similarly sized chunks, it can fairly confidently predict how long it will take to do X more tickets, assuming similar conditions.

The added benefit of using small chunks of time is that one does not need to be super accurate (in most scenarios) - a ticket can take 1 or 4 days; all that matters is that it's possible to give a window of estimation (based on actual data, not guesses) with a certain degree of confidence, which will naturally become even more consistent over time.
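A tiny sketch of what that window looks like in practice, using a made-up cycle-time history (all the numbers here are hypothetical):

```python
# Project a delivery window for N more tickets from historical
# per-ticket cycle times - no up-front estimation needed.
import statistics

cycle_times_days = [1, 2, 2, 3, 1, 4, 2, 2, 3, 2]  # hypothetical history
n_new_tickets = 15

median = statistics.median(cycle_times_days)
low, high = min(cycle_times_days), max(cycle_times_days)

# A rough window: every ticket at the fastest observed pace vs. the slowest.
print(f"Median cycle time: {median} days/ticket")
print(f"Window for {n_new_tickets} tickets: "
      f"{n_new_tickets * low}-{n_new_tickets * high} days of team effort "
      f"(~{n_new_tickets * median} typical)")
```

The point is that the output is a range grounded in observed data, and the range narrows on its own as the history grows and the team's chunking gets more consistent.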


> IMO "can break up work into similarly-sized units" is equivalent to "can estimate accurately".

Yeah, agreed. What I've always seen is "break up work into logical units, ideally as small as possible", which always ends up with a mixture of tickets of different sizes.

