We know you've experienced this. Let's say you just added some new functionality to your application, and you run a new build. And let's say that 50% of your test cases fail. What's the first thing you think? We asked this same question as our "teaser pitch" last winter to 100 developers and QA professionals who walked up to our booth at a recent conference, and 95 of them gave the same answer: the tests must be broken!

This triggers a cascading set of bad assumptions that will make your manager repeat the adage about making an "ASS out of U and ME" on the whiteboard at the next project meeting. Here is why:

* You assume that the problem is not with your application, but with the test cases themselves being broken or no longer appropriate.
* So you spend time comparing the test cases against whatever changed in your build.
* You then dig into the test programs to figure out why a test case no longer passes, and change them until they pass.
* Or you simply give up and fall back to verifying by clicking through your old Word-document test cases. Fun interactive work.

How can you possibly call this testing? Instead of using the test to validate the application, you're using the application to validate the test case, which is a program you coded! Yes, unit tests are important for finding structural bugs in your code.
But once a unit test tries to go beyond that granular level of testing, it becomes just another fragile program in your development environment. It's unrealistic to assume that relying on coded unit test cases alone offers any value for functional testing. In fact, the whole process is so manual and so inefficient that you have to wonder whether you're doing anything more than creating busy work for your own team. Unit testing has its limits. There are techniques people have tried in order to get beyond those limits, but it's a bit like challenging the law of gravity:

* Trying to code for reuse may seem promising, but it can only carry you to the edge of unit testing's limits.
* Trying to test the UI with your QA team doesn't really work when you can't see those middle and back-end layers.

Why are false failures so dangerous? Aside from being a morale vampire that can make the team give up on testing, false failures undermine the overall effectiveness of testing. What do you really learn from testing if you don't know whether a failing test case is even valid? It's like a detective who never gathers evidence. Time to declare war on false failures.
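The cycle described above, where a valid change in the application triggers a "false failure" and the tests are then edited until they pass, can be sketched in a few lines. This is a hypothetical illustration; the function name `format_price` and the "USD" prefix are invented for the example and do not come from any real codebase.

```python
# Hypothetical sketch of a "false failure": the application changed
# on purpose, but the hard-coded test breaks anyway.

def format_price(amount: float) -> str:
    """New-build behavior: a currency code was added to the output."""
    return f"USD ${amount:.2f}"

def test_format_price_v1() -> bool:
    """Written against the old output ("$5.00"). It now fails even
    though the new behavior is intentional: a false failure."""
    return format_price(5) == "$5.00"

def test_format_price_v2() -> bool:
    """"Fixed" by copying the new output from the code itself. It
    passes, but it now mirrors the implementation rather than
    validating a requirement: the application is testing the test."""
    return format_price(5) == "USD $5.00"

print(test_format_price_v1())  # False: the brittle v1 assertion breaks
print(test_format_price_v2())  # True: passes, but verifies very little
```

The second test passes only because its expected value was derived from the program under test, which is exactly the inversion the article warns about.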