I prefer facts to opinions, so I measure things and use the resulting numbers as evidence. If I tell a team that a new IDE will increase productivity by improving quality, then I’m prepared to measure current quality levels and, after the change, show that quality improved without degrading any other significant metric. I do this because I’m at the level where businesses ask me to provide a cost/benefit analysis of my investment recommendations (and improvement almost always requires an investment of some sort, even if it’s only the time needed to explain things to the team).
I like to measure everything, so that when we make changes we can see the impact. I measure each engineer’s commit frequency, their contribution to the codebase in lines of code, the number of defects they fixed and how long each one took, the time spent interacting with the IDE, the browser, and other programs, the number of story points completed per iteration, the number of defects found per iteration, the word count of each story, points per story, defects per story, and many, many others. I collect all this data with the aim of using it to assess the impact of changes to the team. If a developer joins or leaves the team, I have metrics to report the impact to the product owner. If we change IDE, I can see the effect in the numbers. It is always important to correlate the numbers with the opinions of the team.
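As a minimal sketch of this kind of before/after assessment, the snippet below compares the mean of one metric (defects per iteration) across the iterations before and after a change such as an IDE switch. The metric name and all numbers are purely illustrative assumptions, not real data:

```python
from statistics import mean

# Hypothetical per-iteration defect counts, split at the point
# where the team made a change (e.g. switched IDE). Numbers are
# illustrative only.
defects_before = [14, 11, 13, 12, 15, 12]
defects_after = [10, 9, 11, 8, 10, 9]

def percent_change(before, after):
    """Relative change in the mean of a metric after an intervention."""
    b, a = mean(before), mean(after)
    return (a - b) / b * 100

delta = percent_change(defects_before, defects_after)
print(f"Mean defects/iteration: {mean(defects_before):.1f} -> "
      f"{mean(defects_after):.1f} ({delta:+.1f}%)")
```

A single mean comparison like this is only a starting point; with so few data points per iteration, the result should be weighed against other metrics and, as noted above, against the team’s own opinions before drawing conclusions.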
Metrics have gotten a bad name because of a couple of horror stories from the distant past. However, they remain the only way I can justify calling myself an engineer, and the only way I know whether I’m actually improving.