Judge decides case based on AI-hallucinated case law
Umbran said:

Well, that heart-lung machine I mentioned earlier - do you figure the *only* tests ever done on it were directly in surgery? And that when they began tests in surgery, they just sold them to whoever and waited for someone to complain? Of course not! That thing was tested in its various individual parts, then the whole thing was tested on liquids with the same viscosity as blood. Then they put animal blood in it, then human blood - all while attached to machines that monitor the temperature, pressure, oxygenation, and so forth. The blood is examined after being run through the machine, to make sure the cells are undamaged.

Putting a human whose life *depends on it* on the machine is only a final stage, and that is done in carefully controlled trials, with backups if something goes wrong. And the results of all this are reviewed by outside experts, who have no financial stake in the results, before the machine is certified for general use in hospitals.

You had very specific questions, like "What counts as pass/fail?", which I cannot answer in a general sense. I do software project management, so I can only speak to software QA broadly, and in a highly simplified manner.

For example, you don't make an entire application, throw it out to the market untested, and wait for users to complain about errors.

Testing software is a multi-faceted, multi-layered thing. There is testing done by developers to make sure smaller sections of code work as expected (often called "unit testing"). There is testing on a larger scale that checks whether separate parts of an application interact as expected (often called "integration testing", "system integration testing", or "SIT testing"). Then there's testing in which we check that the results the end-user gets are what is expected/desired (usually referred to as "functional testing").

Also, software has multiple types of environments it can exist in. There are development environments, in which developers work, that are highly dynamic and change rapidly as engineers make changes to get things working. There are QA environments, in which most SIT and functional testing happens. There are "staging" environments, where software goes (and can again be tested) in a setup kept as much like the environment the public sees as possible. And finally there's "production", which is where you and I see it, available to the public.

You don't generally test in production. End-users are already getting at it there, and any problems found there are errors that end-users see, are impacted by, and think horrible things about your company over as they go wrong. You always want to find errors before software gets to production.

Testing is not "now start using it randomly and report when something goes wrong". QA professionals are exacting and methodical. They write hundreds, thousands, even tens of thousands of test cases, checking hundreds, thousands, even tens of thousands of individual behaviors of the system. For a big system, if you are serious, those human-written test cases are fed into a system that automates executing the tests, checks whether the result matches what the QA engineer said it should be, and marks the test as failed if it doesn't. That defines a bug, which gets handed back to the developers to fix. Lather-rinse-repeat until the tests all pass.
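A minimal sketch of what one such developer-written unit test can look like, using pytest as an example runner; the conversion function and expected values here are invented purely for illustration:

```python
import pytest

def fahrenheit_to_celsius(temp_f: float) -> float:
    """Hypothetical code under test: convert a temperature reading to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def test_freezing_point():
    # The expected result is defined up front; the runner marks the test
    # as failed if the code under test does not produce it.
    assert fahrenheit_to_celsius(32.0) == 0.0

def test_body_temperature():
    # pytest.approx tolerates floating-point rounding error.
    assert fahrenheit_to_celsius(98.6) == pytest.approx(37.0)
```

Running `pytest` executes every function whose name starts with `test_` and reports each pass or failure - the automated "check the result against what the QA engineer said it should be" step described above.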
Where do those test cases come from? People who make software have product managers who define what the software is supposed to do. For, say, an application that's supposed to support a doctor in diagnosing ailments, they'd define which ailments the system is supposed to be able to catch, and on what basis it is to suggest a diagnosis. QA will test whether the system gives the right results or the wrong ones.

There should be no reason to do all this testing in a "live", meaning production, system. You do it behind the scenes in a controlled QA environment, with databases just like you would have in production, and so forth.
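A sketch of how such product-manager requirements can be turned into automated functional test cases, as a parametrized pytest test; the suggest_diagnosis() function and the ailment list are hypothetical stand-ins, not any real diagnostic product:

```python
import pytest

# Hypothetical requirements from the product manager: for these symptom sets,
# the system must suggest these diagnoses.
REQUIRED_CASES = [
    ({"fever", "cough", "loss_of_smell"}, "influenza-like illness"),
    ({"wheezing", "shortness_of_breath"}, "asthma"),
    ({"rash", "joint_pain", "fever"}, "viral exanthem"),
]

def suggest_diagnosis(symptoms: set) -> str:
    """Stand-in for the system under test; a real system would be far richer."""
    if {"fever", "cough", "loss_of_smell"} <= symptoms:
        return "influenza-like illness"
    if {"wheezing", "shortness_of_breath"} <= symptoms:
        return "asthma"
    if {"rash", "joint_pain"} <= symptoms:
        return "viral exanthem"
    return "unknown"

@pytest.mark.parametrize("symptoms,expected", REQUIRED_CASES)
def test_required_diagnoses(symptoms, expected):
    # One automated check per requirement; any mismatch is recorded as a
    # failure and handed back to the developers as a bug.
    assert suggest_diagnosis(symptoms) == expected
```

Tests like these run in a QA or staging environment against the same kind of data the production system would use, so nothing has to be verified on the live system itself.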
[QUOTE="Umbran, post: 9699474, member: 177"] Well, that heart-lung machine I mentioned earlier - do you figure the [I]only[/I] tests ever done on it were directly in surgery? And that when they began tests in surgery, they just sold them to whoever and waited for someone to complain? Of course not! That thing was tested in its various individual parts, then the the thing was tested on liquids the same viscosity as blood. Then they put animal blood in it, then human blood - all while attached to machines that monitor the temperature, pressure, oxygenation and so forth. The blood will be examines after being run through the machine, to make sure the cells are undamaged. Putting a human whose life [I]depends on it[/I] is only a final stage, and that is done in carefully controlled trials, with backups if something goes wrong. And the results of all this is reviewed by outside experts, who have no financial stake in the results, before the machine will be certified for general use in hospitals. You had very specific questions, like "What counts as pass/fail", which I cannot answer in a general sense. I do software project management. I can only speak to software QA broadly, and highly simplified manner. For example, you don't make an entire application, and then throw it out to the market untested and wait for users to complain about errors. Testing software is a multi-faced, multi-layered thing. There is testing done by developers to make sure smaller sections of code work as expected (often called "unit testing"). There is testing on a larger scale, that checks to see if separate parts of an application interact as expected (often called "integration testing", "system integration testing" or "SIT testing"). Then there's testing in which we check to see that what results the end-user gets are what are expected/desired (usually referred to as "functional testing"). Also, software has multiple types of environments it can exist in. There are development environments in which developers work, that are highly dynamic and change rapidly as engineers make changes to get things to work. There are QA environments in which most SIT and functional testing happens. There's "staging" environments that are the place software goes (and can again be tested) that are typically as much like the environment the public sees as possible, and then finally there's "production", which is where you and I see it, available to the public. You don't generally test in production. End-users are already getting at it there, and any problems found there are errors that end-users see and are impacted by, and thinking horrible things about your company as they go wrong. You always want to find errors before software gets to production. Testing is not "now start using it randomly and report when something goes wrong". QA professionals are exacting, and methodical. They write hundreds, thousands, and tens of thousands of test cases, checking hundreds, thousand, and tens of thousands of individual behaviors of the system. For a big system, if you are serious, those human-written test cases are fed into a system that automates executing the tests and checking if the result matches what the QA engineer said it should, and marks the test as failed if it doesn't. That defines a bug, that gets handed back to the developers to fix. Lather-rinse-repeat until the tests all pass. Where do those tests cases come from? People who make software have product managers who define what the software is supposed to do. 
For, say, an application that's supposed to support a doctor in diagnosing ailments, they'd define what ailments are in the list that the system is supposed to be able to catch, and upon what basis they are to suggest a diagnosis. QA will test whether the system gives the right results, or wrong results. There should be no reason to do all this testing in a "live", meaning a production, system. You do it back behind the scenes in a controlled QA environment, with databases just like you would have in production, and so forth. [/QUOTE]