Learn how assuming variability and preserving options helps unlock the value of data, and how to quantify the impact of economic choices in SAFe. We also answer listener questions about how deep to go with WSJF scoring and about using Monte Carlo simulations to predict epic completion.
SAFe in the News
Unlocking the true value of data: Choosing the right project delivery approach is key for big data project success
By Windsor Gumede, Director Technology, Kaiser Permanente
SAFe in the Trenches
Hear Joe share insights from his talk at this year’s Global SAFe Summit, Strengthening SAFe’s Use of CoD and WSJF, which suggested ways to improve economic choices by quantifying impacts in dollars.
To watch Joe and Don’s presentations, as well as other presentations from the Global SAFe Summit, visit global.safesummit.com/presentations (Videos will be available after Nov. 15)
The Audio Community of Practice section of the show is where we answer YOUR most frequently asked and submitted questions. If you have a question you’d like us to answer on air, please send it to firstname.lastname@example.org
The two questions we answer in this episode are:
- When standing up a team, the entire backlog of features is WSJF’d to determine priority. But the SAFe materials don’t seem to offer a conclusive approach for subsequent scoring. Do your organizations run a full WSJF pass over the entire feature backlog as part of PI prep? Or is a WSJF score assigned to each feature during ongoing refinement at the program level (and if so, who estimates effort? The EAs?)? Is reviewing WSJF for every feature on every team board realistic?
- Does anyone have insight into the logic and calculations used in a Monte Carlo simulation that draws on teams’ historical velocity to predict epic completion from the points forecast per team?
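For context on the first question, SAFe defines WSJF as Cost of Delay divided by job size, where Cost of Delay is the sum of three relative estimates: user-business value, time criticality, and risk reduction/opportunity enablement. A minimal sketch of scoring and ranking a backlog this way (the feature names and estimates below are hypothetical, not from the episode):

```python
def wsjf(user_business_value, time_criticality, risk_opportunity, job_size):
    """WSJF = Cost of Delay / Job Size (SAFe's standard formulation).

    Cost of Delay is the sum of the three relative (e.g. modified-Fibonacci)
    estimates; all inputs are relative numbers, not dollars or days.
    """
    cost_of_delay = user_business_value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

# Hypothetical backlog: (name, BV, TC, RR/OE, job size)
backlog = [
    ("Feature A", 8, 5, 3, 5),
    ("Feature B", 13, 8, 1, 8),
    ("Feature C", 3, 13, 8, 3),
]

# Highest WSJF score = do first
ranked = sorted(backlog, key=lambda f: wsjf(*f[1:]), reverse=True)
for name, *estimates in ranked:
    print(f"{name}: WSJF = {wsjf(*estimates):.2f}")
```

Because the estimates are relative, re-scoring only the features that changed (plus any new arrivals) during refinement is often enough between full PI-prep passes.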
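On the second question, the usual approach is to resample historical iteration velocities with replacement, burn down the epic's remaining points in each trial, and read off percentile forecasts from the distribution of trial lengths. A sketch under that assumption (the velocity figures and point totals are illustrative, not from any real team):

```python
import random

def simulate_epic_completion(remaining_points, historical_velocities,
                             trials=10_000, seed=42):
    """Monte Carlo forecast of iterations to finish an epic.

    Each trial repeatedly samples a past iteration's velocity (with
    replacement) until the remaining points are burned down, assuming
    future throughput resembles the historical record.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        points, iterations = remaining_points, 0
        while points > 0:
            points -= rng.choice(historical_velocities)
            iterations += 1
        outcomes.append(iterations)
    outcomes.sort()
    # e.g. the 85th percentile: 85% of trials finished within this many iterations
    return {p: outcomes[int(trials * p / 100) - 1] for p in (50, 85, 95)}

# Illustrative example: 200 points remaining, six past iterations of velocity
forecast = simulate_epic_completion(200, [18, 22, 25, 20, 15, 24])
print(forecast)  # iterations needed at the 50th, 85th, and 95th percentiles
```

Quoting a percentile range ("85% likely within N iterations") rather than a single date is the main advantage over a plain average-velocity projection.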