Discussion:
How and where to start software development?
Claudio Santos
2012-06-21 17:22:44 UTC
Permalink
Hi,

I'm reading the book, and my main doubt is: how do I start software
development?

I would like to know whether I should start by writing a test using the
Selenium framework, because the systems I develop always have to satisfy
users who enter information through text boxes.

Is this a main rule that all of us must follow?

Thanks
Steve Freeman
2012-06-22 08:32:23 UTC
Permalink
> I'm reading the book, and my main doubt is: how do I start software development?
>
> I would like to know whether I should start by writing a test using the Selenium framework, because the systems I develop always have to satisfy users who enter information through text boxes.

That might be a good start. What would you like to show that the system can do?

> Is this a main rule that all of us must follow?

There are few rules that everyone must follow.
Claudio Santos
2012-06-22 11:47:30 UTC
Permalink
I work with Geographic Information Systems, and most of the systems I
develop are strongly tied to the user interface.

My doubt is this: say my system must show a geographic location found
through a phonetic search.

I would start development by creating a test that checks that when a user
searches, the correct geolocation is shown on the map.

Driven by that test, I would create a web page with a text box; the search
would fail, so I would write a test for the phonetic search and implement
it until it no longer fails, and after that I would build my application
step by step, always starting from a failing test against the user
interface.

Is this the right concept?

About the rules I mentioned, I believe most of them are about best
practices for developing object-oriented software, always considering
cohesion and coupling.

Thanks very much for the answer...
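To make that loop concrete: a minimal sketch in Python (my choice for
brevity; the names, the gazetteer data, and the plain function standing in
for the web page and Selenium are all made up, not from the thread). The
top-level test drives the application through its outermost seam and would
fail until the feature exists:

```python
# Hypothetical entry point standing in for "type a name in the text box".
GAZETTEER = {"lisboa": (38.72, -9.14)}   # made-up sample data

def search_location(query: str):
    """Walking-skeleton lookup: exact match only; phonetic search comes later."""
    return GAZETTEER.get(query.strip().lower())

# Top-level test: one happy path, one not-found path.
assert search_location("Lisboa") == (38.72, -9.14)
assert search_location("Nowhere") is None
```

In a real system the assertions would run through the browser against the
map page; the shape of the test, though, stays the same.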


Steve Freeman
2012-06-22 12:36:26 UTC
Permalink
That sounds reasonable.

The trick is not to do too much through the top-level tests, otherwise they grow out of control. In your case, probably there would be a smaller number of top-level tests that show the main paths through the system: type in a name, show what happens when a result is or is not found. Then I'd probably have more detailed testing lower down to exercise the components that did the phonetic lookup, and the translation to coordinates.

S
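For instance, the phonetic-lookup component could be exercised by small,
detailed tests far below the UI. A hedged sketch in Python, using a
simplified Soundex encoding as a stand-in (the thread never names an
actual algorithm, and real Soundex has extra rules for h/w that this
version skips):

```python
def soundex(word: str) -> str:
    """Simplified American Soundex: first letter plus up to three digits."""
    mapping = {ch: d for letters, d in
               [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                ("l", "4"), ("mn", "5"), ("r", "6")]
               for ch in letters}
    word = word.lower()
    code = word[0].upper()
    prev = mapping.get(word[0])
    for ch in word[1:]:
        digit = mapping.get(ch)
        if digit is None:
            prev = None          # vowels break runs of duplicates (simplified)
            continue
        if digit != prev:
            code += digit
            prev = digit
        if len(code) == 4:
            break
    return code.ljust(4, "0")    # pad short codes, e.g. "Lee" -> "L000"

# Detailed, component-level checks that never touch a browser:
assert soundex("Robert") == soundex("Rupert")    # sound-alike names match
assert soundex("Smith") == soundex("Smyth")
assert soundex("Robert") != soundex("Lisboa")
```

These tests run in milliseconds, so the slow browser-level suite only
needs to prove the pieces are wired together.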


Steve Freeman

Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com

+44 797 179 4105
Twitter: @sf105
Higher Order Logic Limited
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 7522677
Claudio Santos
2012-06-22 13:50:15 UTC
Permalink
Yes, there will be a detailed test for each piece of functionality at its
smallest level. I see the problem of the software growing out of control
when development is guided only by the requirements, but for now this is
the only way I see to create a walking skeleton and deploy software with
each piece of functionality tested end-to-end.

I am still reading your book, and I see this as the best way to fix my
problems, which mostly come from deploying software without all the tests.

Thanks

Rick Pingry
2012-06-23 13:50:04 UTC
Permalink
Thanks Steve, this begins to answer some of the questions I have been
having lately as well. I just watched your "Fractal TDD" video, and it
touched on this quite a bit.

On a recent project, I had it quite easy in that I did not have any real
external dependencies to worry about. It was actually a console
application. As described in GOOS, for every new feature I wanted to
add, I would write an end-to-end test that exercised the feature, and then
I would drive down to where the classes were implementing things, all the
way down (like tortoises). I was not sure about this because at each level
I felt like I was kind of duplicating tests. Because there were tests at
each level, I felt like the tests were too coupled with my design.

Then I started thinking that maybe I just needed two kinds of tests: one at
the very top, an acceptance test that shows an example of the feature, and
then ones right where the actual changes were happening.

So, first question... Is this right? It felt better. You are saying that
you should write full end-to-end acceptance tests only for a few of the
main paths, like the main "happy cases" or the ones that carry the most
risk, and then for the others start just a little deeper so you don't have
to take everything through the brittle outer layers, past your ports and
adapters, to what I would call "edge-to-edge" rather than "end-to-end".

On another project I was working on, this was much more the case. I would
actually test through a mocked version of my "port" to test "edge-to-edge".
I wondered at the time how much the interface of my port should resemble
the external resource I was adapting. From what I have seen, you are
saying it should not; it should instead reflect the needs of your domain.
So, on to my second question...

As I worked this way, I would quite often find code that WANTED to live in
this boundary between my domain classes and my adapters, or in the adapter
itself. This adapter area became dangerous ground. I had a hard time
budding off classes in between in a tested way. I actually struggle with
budding off in-between classes pretty badly in general. All of my tests
seem to be so tied to the interfaces of their collaborators that to bud
something in-between makes everything break. It is probably a more
systemic problem now that I think of it. Anyway, it is a real problem in
particular for this adapter area because as I learn more about the external
resource I am using and what I am required to do with it, code collects in
this adapter layer that seems untestable. Perhaps that is where I need to
be doing "integration tests" that I have been avoiding?
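The "edge-to-edge" idea might be sketched like this (Python, with
hypothetical names; Rick's actual projects are not shown in the thread).
The port's interface speaks the domain's language, and the test replaces
it with a mock so the test runs from just inside one boundary to just
inside the other:

```python
from unittest.mock import Mock

class MapSearch:
    """Domain object; depends on a geocoding *port*, not on a web service."""
    def __init__(self, geocoder):
        self._geocoder = geocoder   # port: geocoder.coordinates_for(name)

    def show(self, place_name):
        coords = self._geocoder.coordinates_for(place_name)
        if coords is None:
            return "not found"
        return f"pin at {coords}"

# Edge-to-edge test: mock the port, exercise only the domain behaviour.
geocoder = Mock()
geocoder.coordinates_for.return_value = (48.85, 2.35)
assert MapSearch(geocoder).show("Paris") == "pin at (48.85, 2.35)"

geocoder.coordinates_for.return_value = None
assert MapSearch(geocoder).show("Atlantis") == "not found"
```

Note the port takes and returns domain terms (a place name, coordinates),
not whatever shape the external geocoding service happens to use; the
adapter does that translation, and gets its own integration tests.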

J. B. Rainsberger
2012-06-23 14:19:46 UTC
Permalink
I find success in combining a few end-to-end tests for basic flow, and
perhaps one key error flow, with exhaustive tiny isolated tests.

If I'm not sure where I'm going, then I typically start with end-to-end
tests; then, when they start to duplicate themselves too much, I dive down.

I slowly move end-to-end tests into a system test suite, then replace them
with isolated tests to check each layer more carefully.

I principally avoid exhaustive end-to-end tests to show basic correctness;
I save them for system-level problems, like response time or scaling.
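One way to realize that split, sketched with plain unittest and toy names
(J.B. doesn't name a tool, and the system under test here is a stand-in),
is to load the isolated tests and the system tests into separate suites so
each can run on its own schedule:

```python
import unittest

def normalize(name: str) -> str:
    """Toy system under test, so the suites below have something to check."""
    return name.strip().lower()

class IsolatedTests(unittest.TestCase):
    """Exhaustive tiny tests: run constantly while developing."""
    def test_strips_surrounding_whitespace(self):
        self.assertEqual("lisboa", normalize("  Lisboa "))

    def test_lowercases_mixed_case(self):
        self.assertEqual("lisboa", normalize("LiSbOa"))

class SystemTests(unittest.TestCase):
    """A few end-to-end flows: run less often, e.g. only on the build server."""
    def test_basic_flow(self):
        self.assertEqual("lisboa", normalize("Lisboa"))

loader = unittest.TestLoader()
fast_suite = loader.loadTestsFromTestCase(IsolatedTests)
slow_suite = loader.loadTestsFromTestCase(SystemTests)

# Developers run only the fast suite; the build server runs both.
result = unittest.TextTestRunner(verbosity=0).run(fast_suite)
assert result.wasSuccessful()
```

The same separation falls out naturally with test directories or markers
in most test runners.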

J. B. Rainsberger :: jbrains.ca
Daniel Wellman
2012-06-23 17:32:04 UTC
Permalink
Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
about acceptance tests vs. integration tests. Does what I describe below
align with your philosophies, or have I missed something?

I kept thinking that the style described in the book was to prefer
acceptance tests to *always* be end-to-end tests. The book says this (p.
10): "Our preferred implementation of the "role" of acceptance testing is
to write end-to-end tests which, as we just noted, should be as end-to-end
as possible, our bias often leads us to use these terms interchangeably
although, in some cases, acceptance tests might not be end-to-end."

Then I saw Steve say this:

> The trick is not to do too much through the top-level tests, otherwise
> they grow out of control. In your case, probably there would be a smaller
> number of top-level tests that show the main paths through the system


Which made me start thinking that there was likely a point in the project's
lifespan where the cost of running or writing those acceptance tests as
end-to-end tests became too high. Perhaps it's the length of the tests,
perhaps the team has grown confident enough with their progress that they
know when they need an end-to-end test and when they don't.

Then I saw J.B. say this, which gives me some confirmation that others are
also eventually switching the default style of acceptance tests from
end-to-end to something ... smaller.

> I slowly move end to end tests into a system test suite, then replace them
> with isolated tests to check each layer more carefully.
> I principally avoid exhaustive end to end tests to show basic correctness;
> I save them for system level problems, like response time or scaling.


So then I wondered, "What would the acceptance tests look like if they
aren't end-to-end?" One idea I had was, "Use the ports and adapters
architecture and swap out some of the production adapters with
test-specific adapters." For example, replace the production database with
versions that store their data in-memory.
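That swap could be sketched as follows (hypothetical names, mine rather
than Daniel's; the in-memory class plays the role of the test-specific
adapter standing in for the production database):

```python
class InMemoryLocationRepository:
    """Test-specific adapter: same interface a database-backed adapter offers."""
    def __init__(self):
        self._rows = {}

    def save(self, name, coords):
        self._rows[name] = coords

    def find(self, name):
        return self._rows.get(name)   # None when nothing is stored

class LocationService:
    """Domain code; it only knows the repository interface, not the database."""
    def __init__(self, repository):
        self._repository = repository

    def register(self, name, coords):
        self._repository.save(name, coords)

    def locate(self, name):
        return self._repository.find(name)

# Acceptance-style test wired with the in-memory adapter instead of the DB:
service = LocationService(InMemoryLocationRepository())
service.register("Burnham", (51.54, -0.65))
assert service.locate("Burnham") == (51.54, -0.65)
assert service.locate("Nowhere") is None
```

Production wiring would pass a database-backed repository with the same
two methods; the acceptance test itself would not change.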

I've spent the last eight years primarily working on web applications, so
when I hear "as end-to-end as possible", I think of exercising the browser
using a tool like Selenium / WebDriver. Writing a new acceptance test for
every new feature in a long-living and expanding system using Selenium
gives me some pause. Perhaps when the team decides that not all new
acceptance tests need to be fully end-to-end, there might be other ways to
handle that - perhaps replace the UI adapter with something else, say a
headless browser like HtmlUnit or something.

And as I write this, it also occurs to me that if in fact a large webapp
grows over several years, then perhaps this is an indicator to break up
that system into smaller webapps or systems or other units.


Am I off my rocker?

Dan




On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <jbrainsberger-***@public.gmane.org
> wrote:

> I find success in combining a few end to end tests for basic flow and
> perhaps one key error flow with exhaustive tiny isolated tests.
>
> If I'm not sure where I'm going, then I typically start with end to end
> tests, then when they start to duplicate themselves too much, I dive down.
>
> I slowly move end to end tests into a system test suite, then replace them
> with isolated tests to check each layer more carefully.
>
> I principally avoid exhaustive end to end tests to show basic correctness;
> I save them for system level problems, like response time or scaling.
>
> J. B. Rainsberger :: jbrains.ca
> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpingry-***@public.gmane.org> wrote:
>
>> Thanks Steve, this begins to answer some of the questions I have been
>> having lately as well. I just watched your "Fractal TDD" video, and it
>> touched on this quite a bit
>>
>> On a recent project, I had it quite easy in that I did not have any real
>> external dependencies to worry about. It was actually a console
>> application. As described by GOOS, for every new feature I was wanting to
>> add, I would write an end-to-end test that exercised the feature, and then
>> I would drive down to where the classes were implementing things all the
>> way down (like tortoises). I was not sure about this because at each level
>> I felt like I was kind of duplicating tests. Because there were tests at
>> each level, I felt like the tests were too coupled with my design.
>>
>> Then I started thinking that maybe I just needed 2 tests, one on the very
>> top, an acceptance test that shows an example of feature, then ones right
>> where there were actually changes happening.
>>
>> So, first question... Is this right? It felt better. You are saying
>> that you should just write the full end-to-end acceptance test on a few of
>> the main paths, like the main "Happy Cases", or the ones that provide the
>> most risk, then for the others drive just a little deeper so you don't have
>> to take everything through the brittle outter layers, past your ports and
>> adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>
>> On another project I was working on, this was much more the case. I
>> would actually test through a mocked version of my "port" to test
>> "edge-to-edge". I wondered at that time how much the interface of my port
>> should resemble the external resource I was adapting. From what I have
>> seen, you are saying it should not, it should more reflect the needs of
>> your domain. So, for my second question...
>>
>> As I worked this way, I would quite often find code that WANTED to live
>> in this boundary between my domain classes and my adapters, or in the
>> adapter itself. This adapter area became dangerous ground. I had a hard
>> time budding off classes in between in a tested way. I actually struggle
>> with budding off in-between classes pretty badly in general. All of my
>> tests seem to be so tied to the interfaces of their collaborators that to
>> bud something in-between makes everything break. It is probably a more
>> systemic problem now that I think of it. Anyway, it is a real problem in
>> particular for this adapter area because as I learn more about the external
>> resource I am using and what I am required to do with it, code collects in
>> this adapter layer that seems untestable. Perhaps that is where I need to
>> be doing "integration tests" that I have been avoiding?
>>
>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
>>>
>>> That sounds reasonable.
>>>
>>> The trick is not to do too much through the top-level tests, otherwise
>>> they grow out of control. In your case, probably there would be a smaller
>>> number of top-level tests that show the main paths through the system: type
>>> in a name, show what happens when a result is or is not found. Then I'd
>>> probably have more detailed testing lower down to exercise the components
>>> that did the phonetic look up, and the translation to coordinates.
>>>
>>> S
>>>
>>> Steve Freeman
>>>
>>> Winner of the Agile Alliance Gordon Pask award 2006
>>> Book: http://www.growing-object-oriented-software.com
>>>
>>> +44 797 179 4105
>>> Twitter: @sf105
>>> Higher Order Logic Limited
>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
>>> Company registered in England & Wales. Number 7522677
>>>
>>>
>>>
>>>
J. B. Rainsberger
2012-06-23 17:53:28 UTC
Permalink
J. B. Rainsberger :: jbrains.ca
On Jun 23, 2012 1:32 PM, "Daniel Wellman" <etldan-***@public.gmane.org> wrote:
>
> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
about acceptance tests vs. integration tests.

Cheers. I'm glad I could help.

> Then I saw J.B. say this, which gives me some confirmation that others
are also eventually switching the default style of acceptance tests from
end-to-end to something ... smaller.

I have used smaller tests to give customers confidence in the presence of,
say, a very specific policy. I use 1 to 3 flow tests (really "examples"
now) to show that the system uses the policy at the right time and how we
react when it screws up. For the rest, I use what James Shore called
"customer unit tests", which bash the policy object or objects to pieces
with every conceivable check. This makes most customers feel secure that
the policy behaves correctly.
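As an illustration of that split (a hypothetical sketch; the policy, names, and rule are invented here, not taken from J.B. or James Shore): the "customer unit tests" drive the policy object directly and hit every boundary, while only a flow test or two elsewhere shows the system invoking the policy at the right time.

```java
// Hypothetical policy object: the rule and names are illustrative only.
class DiscountPolicy {
    // Orders of 100 units or more earn a 10% discount.
    double discountFor(int units) {
        return units >= 100 ? 0.10 : 0.0;
    }
}

public class DiscountPolicyCustomerTests {
    // "Customer unit tests": bash the policy object to pieces with every
    // conceivable check, without dragging in the rest of the system.
    public static void main(String[] args) {
        DiscountPolicy policy = new DiscountPolicy();
        check(policy.discountFor(0) == 0.0, "no order, no discount");
        check(policy.discountFor(99) == 0.0, "just below the threshold");
        check(policy.discountFor(100) == 0.10, "exactly at the threshold");
        check(policy.discountFor(5000) == 0.10, "well above the threshold");
        System.out.println("policy checks passed");
    }

    static void check(boolean ok, String description) {
        if (!ok) throw new AssertionError(description);
    }
}
```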

> And as I write this, it also occurs to me that if in fact a large webapp
grows over several years, then perhaps this is an indicator to break up
that system into smaller webapps or systems or other unit

Sounds good to me.

Of course, remember that most of my advice about small tests focuses on
programmer tests, not customer tests.

> Am I off my rocker?

Possibly yes, but probably no.

J. B. Rainsberger :: http://www.jbrains.ca
David Peterson
2012-06-23 18:16:54 UTC
Permalink
I do that too. If you write customer acceptance tests for one specific
behaviour, often you'll find it's implemented by a single class which you
can test directly (especially if you keep bundling up lower-level classes
into classes at a higher level of abstraction).

Try to make a clean separation of the test cases (the "examples", written
in customer language) from their implementation (how they are actually run
against the system under test). That is, try to write the test case to be
implementation-agnostic, so that you could implement it by calling directly
into your domain model classes, or using Selenium and driving it through
the user interface, without having to alter the test case at all.
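A minimal sketch of that separation (the names and the fake domain logic here are invented for illustration, not David's code): the example sees only a driver interface, so whether it runs against the domain model or through Selenium is decided by which implementation you construct.

```java
// Sketch only: driver names and domain behaviour are invented.
interface SearchDriver {
    String locationFor(String placeName);
}

// Implementation that calls straight into the domain model (fast, no browser).
class DomainSearchDriver implements SearchDriver {
    public String locationFor(String placeName) {
        // stand-in for calls to the real domain classes
        return placeName.equalsIgnoreCase("burnham") ? "51.54N 0.65W" : "not found";
    }
}

// A SeleniumSearchDriver implementing the same interface would drive the
// browser instead; the test case below would not change at all.
public class SearchExamples {
    static void searchFindsAKnownPlace(SearchDriver driver) {
        if (!driver.locationFor("Burnham").equals("51.54N 0.65W"))
            throw new AssertionError("known place should be located");
        if (!driver.locationFor("Atlantis").equals("not found"))
            throw new AssertionError("unknown place should report not found");
    }

    public static void main(String[] args) {
        searchFindsAKnownPlace(new DomainSearchDriver());
        System.out.println("examples passed against the domain-backed driver");
    }
}
```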

David



Steve Freeman
2012-06-23 18:25:03 UTC
Permalink
On 23 Jun 2012, at 18:32, Daniel Wellman wrote:
> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
> about acceptance tests vs. integration tests. Does what I describe below
> align with your philosophies, or have I missed something?

glad to be of service...

> Which made me start thinking that there was likely a point in the project's
> lifespan where the cost of running or writing those acceptance tests as
> end-to-end tests became too high. Perhaps it's the length of the tests,
> perhaps the team has grown confident enough with their progress that they
> know when they need an end-to-end test and when they don't.

In retrospect, a better line would have been that the end-to-end tests need ongoing care too. They need to be checked for duplication and for continued relevance, and possibly reworked as new insights arrive. Plenty of teams appear to have write-only acceptance tests.

There's also some interesting work from Dan Bodart on just making the end-to-end build go fast enough (10 secs for a real system) that the question becomes moot. Some of this will be about making the system under test itself faster.

> So then I wondered, "What would the acceptance tests look like if they
> aren't end-to-end?" One idea I had was, "Use the ports and adapters
> architecture and swap out some of the production adapters with
> test-specific adapters." For example, replace the production database with
> versions that store their data in-memory.

Yes. As long as you remember to back this with thorough integration testing of your database adapter.
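One way that pairing can look (a sketch with invented names, not from the thread): the domain depends only on a port interface; the in-memory adapter keeps acceptance tests fast, while the production JDBC adapter gets its own integration tests against a real database.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical port: the domain depends on this interface, not on JDBC.
interface LocationStore {
    void save(String name, String coordinates);
    String find(String name);
}

// Test-specific adapter: honours the same contract as the production
// database adapter, but holds data in memory so acceptance tests run
// fast. The real JDBC adapter still needs thorough integration tests
// against an actual database.
class InMemoryLocationStore implements LocationStore {
    private final Map<String, String> rows = new HashMap<>();

    public void save(String name, String coordinates) {
        rows.put(name, coordinates);
    }

    public String find(String name) {
        return rows.get(name);
    }
}

public class InMemoryStoreDemo {
    public static void main(String[] args) {
        LocationStore store = new InMemoryLocationStore();
        store.save("Burnham", "51.54N 0.65W");
        if (!"51.54N 0.65W".equals(store.find("Burnham")))
            throw new AssertionError("stored location should be retrievable");
        System.out.println("in-memory adapter satisfies the port contract");
    }
}
```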

> I've spent the last eight years primarily working on web applications, so
> when I hear as "end-to-end" as possible, I think of exercising the browser
> using a tool like Selenium / WebDriver. Writing a new acceptance test for
> every new feature in a long-living and expanding system using Selenium
> gives me some pause. Perhaps when the team decides that not all new
> acceptance tests need to be fully end-to-end, there might be other ways to
> handle that - perhaps replace the UI adapter with something else, say a
> headless browser like HtmlUnit or something.

It's about building up a mix of tests. In the end we only promise a degree of confidence. Plus (very important) having something readable that explains what the system really does.

> And as I write this, it also occurs to me that if in fact a large webapp
> grows over several years, then perhaps this is an indicator to break up
> that system into smaller webapps or systems or other units.

+1

S.
Matteo Vaccari
2012-06-24 15:45:16 UTC
Permalink
>
>
> So then I wondered, "What would the acceptance tests look like if they
> aren't end-to-end?" One idea I had was, "Use the ports and adapters
> architecture and swap out some of the production adapters with
> test-specific adapters." For example, replace the production database with
> versions that store their data in-memory.
>
> I've spent the last eight years primarily working on web applications, so
> when I hear as "end-to-end" as possible, I think of exercising the browser
> using a tool like Selenium / WebDriver. Writing a new acceptance test for
> every new feature in a long-living and expanding system using Selenium
> gives me some pause. Perhaps when the team decides that not all new
> acceptance tests need to be fully end-to-end, there might be other ways to
> handle that - perhaps replace the UI adapter with something else, say a
> headless browser like HtmlUnit or something.
>
>
I find that if you write the acceptance test in a way that's expressive and
abstract enough, you can't tell by looking at the test if it goes through
the GUI or not. For instance, this is an acceptance test I use for
training:

@Test
public void convertsTemperatureFromCelsiusToFahrenheit() throws Exception {
    TempConvPage page = TempConvPage.getTempConvPage();
    assertEquals("", page.getDisplay());
    page.setTemperature("0");
    TempConvPage secondPage = page.submit();
    assertEquals("0.0 C° = 32.0 F°", secondPage.getDisplay());
}

Does this use Selenium, HtmlUnit, or the application service layer? You
can't tell. It depends on how the TempConvPage object is implemented.
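For instance, here is a guess at one service-layer-backed shape for TempConvPage (invented for illustration, not Matteo's actual code); a Selenium-backed version would expose exactly the same methods:

```java
import java.util.Locale;

// One possible implementation of TempConvPage (a sketch, not Matteo's
// code): this version computes through the application logic directly.
// A Selenium-backed version would present the same interface.
class TempConvPage {
    private String display = "";
    private String enteredTemperature;

    static TempConvPage getTempConvPage() {
        return new TempConvPage();
    }

    String getDisplay() {
        return display;
    }

    void setTemperature(String celsius) {
        this.enteredTemperature = celsius;
    }

    TempConvPage submit() {
        // stand-in for a call into the application service layer
        double c = Double.parseDouble(enteredTemperature);
        double f = c * 9.0 / 5.0 + 32.0;
        TempConvPage next = new TempConvPage();
        next.display = String.format(Locale.US, "%.1f C° = %.1f F°", c, f);
        return next;
    }

    public static void main(String[] args) {
        TempConvPage page = TempConvPage.getTempConvPage();
        page.setTemperature("0");
        if (!page.submit().getDisplay().equals("0.0 C° = 32.0 F°"))
            throw new AssertionError("conversion display mismatch");
        System.out.println("same example passes against the service-layer page");
    }
}
```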
Another example:

@Test
public void consultantsArePayedOnlyIfTheyWorkAtLeastSevenHours() throws Exception {
    givenConsultant("1111", "John", "Doe");
    givenBadgeLog(by("1111"), on("2010/01/03"), from("00:00"), to("06:59"));
    givenBadgeLog(by("1111"), on("2010/01/04"), from("00:00"), to("07:00"));

    processSlipsFor(2010, 1);
    expectOneSlip(
        "201001-doe-john.txt",
        "1111 | Doe | John\n" +
        "Work days: 1 * € 150,00 = € 150,00\n"
    );
}

Another one

def test_uncle_bobs_example
  # FEAD
  start_game
  assert_equal "AAAA", current_guess
  respond_with "+"
  assert_equal "ABBB", current_guess
  respond_with "-"
  assert_equal "CACC", current_guess
  respond_with "-"
  assert_equal "DDAD", current_guess
  respond_with "++"
  assert_equal "DEAE", current_guess
  respond_with "-++"
  assert_equal "FDAE", current_guess
  respond_with "--++"
  assert_equal "FEAD", current_guess
  respond_with "++++"
  assert_victory
end


Basically you build a layer of methods that focus on the business
requirements and hide how the AT is really implemented. Then you can start
with an AT that works through the application service layer and then change
it to go through the UI, or the other way around.

Matteo
Ben Biddington
2012-06-25 21:23:51 UTC
Permalink
> What would the acceptance tests look like if they aren't end-to-end?

I have been considering this also. I often fall into the trap of having
controller-level abstractions influence design, so I wondered about writing
the UI adapter *last*.

Perhaps these are *eventual*-end-to-end tests.

<bb />

On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
>
> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
> about acceptance tests vs. integration tests. Does what I describe below
> align with your philosophies, or have I missed something?
>
> I kept thinking that the style described in the book was to prefer
> acceptance tests to *always* be end-to-end tests. The book says this (p.
> 10): "Our preferred implementation of the "role" of acceptance testing is
> to write end-to-end tests which, as we just noted, should be as end-to-end
> as possible, our bias often leads us to use these terms interchangeably
> although, in some cases, acceptance tests might not be end-to-end."
>
> Then I saw Steve say this:
>
>> The trick is not to do too much through the top-level tests, otherwise
>> they grow out of control. In your case, probably there would be a smaller
>> number of top-level tests that show the main paths through the system
>
>
> Which made me start thinking that there was likely a point in the
> project's lifespan where the cost of running or writing those acceptance
> tests as end-to-end tests became too high. Perhaps it's the length of the
> tests, perhaps the team has grown confident enough with their progress that
> they know when they need an end-to-end test and when they don't.
>
> Then I saw J.B. say this, which gives me some confirmation that others are
> also eventually switching the default style of acceptance tests from
> end-to-end to something ... smaller.
>
> I slowly move end to end tests into a system test suite, then replace them
>> with isolated tests to check each layer more carefully.
>> I principally avoid exhaustive end to end tests to show basic
>> correctness; I save them for system level problems, like response time or
>> scaling.
>
>
> So then I wondered, "What would the acceptance tests look like if they
> aren't end-to-end?" One idea I had was, "Use the ports and adapters
> architecture and swap out some of the production adapters with
> test-specific adapters." For example, replace the production database with
> versions that store their data in-memory.
>
> I've spent the last eight years primarily working on web applications, so
> when I hear as "end-to-end" as possible, I think of exercising the browser
> using a tool like Selenium / WebDriver. Writing a new acceptance test for
> every new feature in a long-living and expanding system using Selenium
> gives me some pause. Perhaps when the team decides that not all new
> acceptance tests need to be fully end-to-end, there might be other ways to
> handle that - perhaps replace the UI adapter with something else, say a
> headless browser like HtmlUnit or something.
>
> And as I write this, it also occurs to me that if in fact a large webapp
> grows over several years, then perhaps this is an indicator to break up
> that system into smaller webapps or systems or other units.
>
>
> Am I off my rocker?
>
> Dan
>
>
>
>
> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <
> jbrainsberger-***@public.gmane.org> wrote:
>
>> I find success in combining a few end to end tests for basic flow and
>> perhaps one key error flow with exhaustive tiny isolated tests.
>>
>> If I'm not sure where I'm going, then I typically start with end to end
>> tests, then when they start to duplicate themselves too much, I dive down.
>>
>> I slowly move end to end tests into a system test suite, then replace
>> them with isolated tests to check each layer more carefully.
>>
>> I principally avoid exhaustive end to end tests to show basic
>> correctness; I save them for system level problems, like response time or
>> scaling.
>>
>> J. B. Rainsberger :: jbrains.ca
Daniel Wellman
2014-08-10 23:38:12 UTC
Permalink
This was a thread from a few years ago which turned into a discussion of
acceptance tests, end-to-end tests, system tests, and where people might
test what. I thought there were a lot of helpful (to me, at least) stories
of how people migrate acceptance tests from end-to-end to directly testing
the domain objects, and migrating end-to-end acceptance tests into a suite
of system tests.

Now, a few years later, I'm wondering if anyone has any new insights or
things they've tried that are working (or not) for them?

I saw a video of Seb Rose's talk at Cuke Up about how he's trying an
experiment to build acceptance tests (in Cucumber) that can be run either
end-to-end with a browser or using lower-level objects by changing an
environment variable. Here's a link to the talk:
https://skillsmatter.com/skillscasts/3957-dont-you-trust-me-a-technical-approach-to-building-trust-and-consensus
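A sketch of that switching idea (not Seb Rose's actual code; the variable name and driver types are invented): an environment variable chooses which driver backs the acceptance steps, so the same scenarios can run end-to-end through a browser or against lower-level objects.

```java
// Invented names throughout: this only shows the shape of the technique.
interface AppDriver {
    String search(String term);
}

class DomainDriver implements AppDriver {
    public String search(String term) {
        return "results for " + term; // stand-in for direct domain calls
    }
}

public class DriverFactory {
    static AppDriver fromEnvironment() {
        // The variable name ACCEPTANCE_MODE is invented for this sketch.
        String mode = System.getenv().getOrDefault("ACCEPTANCE_MODE", "domain");
        if (mode.equals("browser")) {
            // a Selenium-backed AppDriver would be constructed here
            throw new UnsupportedOperationException("browser driver not shown");
        }
        return new DomainDriver();
    }

    public static void main(String[] args) {
        AppDriver driver = fromEnvironment();
        if (!driver.search("maps").equals("results for maps"))
            throw new AssertionError("driver should answer the query");
        System.out.println("running with " + driver.getClass().getSimpleName());
    }
}
```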


One thing I've learned is that when builds get long enough, it's now easy
enough to set up the build to be run in parallel. I've seen services that
do this, and other implementations done by hand to get the same effect. My
take is that this can get you some substantial gains quickly in terms of
cutting total build time, but these end-to-end tests often (but not always)
tend to flicker the most (as compared to unit tests or some integration
tests). Thinking harder about what these tests actually cover and what
level of test is appropriate, then rewriting or refactoring accordingly,
seems to be working well from what I've tried so far.

Anyone else?

Cheers,
Dan


On Monday, June 25, 2012 5:23:51 PM UTC-4, Ben Biddington wrote:
>
> > What would the acceptance tests look like if they aren't end-to-end?
>
> I have been considering this also. I often fall into the trap of having
> controller-level abstractions influence design, so I wondered about writing
> the UI adapter *last*.
>
> Perhaps these are *eventual*-end-to-end tests.
>
> <bb />
>
> On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
>>
>> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
>> about acceptance tests vs. integration tests. Does what I describe below
>> align with your philosophies, or have I missed something?
>>
>> I kept thinking that the style described in the book was to prefer
>> acceptance tests to *always* be end-to-end tests. The book says this (p.
>> 10): "Our preferred implementation of the "role" of acceptance testing is
>> to write end-to-end tests which, as we just noted, should be as end-to-end
>> as possible, our bias often leads us to use these terms interchangeably
>> although, in some cases, acceptance tests might not be end-to-end."
>>
>> Then I saw Steve say this:
>>
>>> The trick is not to do too much through the top-level tests, otherwise
>>> they grow out of control. In your case, probably there would be a smaller
>>> number of top-level tests that show the main paths through the system
>>
>>
>> Which made me start thinking that there was likely a point in the
>> project's lifespan where the cost of running or writing those acceptance
>> tests as end-to-end tests became too high. Perhaps it's the length of the
>> tests, perhaps the team has grown confident enough with their progress that
>> they know when they need an end-to-end test and when they don't.
>>
>> Then I saw J.B. say this, which gives me some confirmation that others
>> are also eventually switching the default style of acceptance tests from
>> end-to-end to something ... smaller.
>>
>> I slowly move end to end tests into a system test suite, then replace
>>> them with isolated tests to check each layer more carefully.
>>> I principally avoid exhaustive end to end tests to show basic
>>> correctness; I save them for system level problems, like response time or
>>> scaling.
>>
>>
>> So then I wondered, "What would the acceptance tests look like if they
>> aren't end-to-end?" One idea I had was, "Use the ports and adapters
>> architecture and swap out some of the production adapters with
>> test-specific adapters." For example, replace the production database with
>> versions that store their data in-memory.
>>
>> I've spent the last eight years primarily working on web applications, so
>> when I hear as "end-to-end" as possible, I think of exercising the browser
>> using a tool like Selenium / WebDriver. Writing a new acceptance test for
>> every new feature in a long-living and expanding system using Selenium
>> gives me some pause. Perhaps when the team decides that not all new
>> acceptance tests need to be fully end-to-end, there might be other ways to
>> handle that - perhaps replace the UI adapter with something else, say a
>> headless browser like HtmlUnit or something.
>>
>> And as I write this, it also occurs to me that if in fact a large webapp
>> grows over several years, then perhaps this is an indicator to break up
>> that system into smaller webapps or systems or other units.
>>
>>
>> Am I off my rocker?
>>
>> Dan
>>
>>
>>
>>
>> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <jbrain...-***@public.gmane.org
>> <javascript:>> wrote:
>>
>>> I find success in combining a few end to end tests for basic flow and
>>> perhaps one key error flow with exhaustive tiny isolated tests.
>>>
>>> If I'm not sure where I'm going, then I typically start with end to end
>>> tests, then when they start to duplicate themselves too much, I dive down.
>>>
>>> I slowly move end to end tests into a system test suite, then replace
>>> them with isolated tests to check each layer more carefully.
>>>
>>> I principally avoid exhaustive end to end tests to show basic
>>> correctness; I save them for system level problems, like response time or
>>> scaling.
>>>
>>> J. B. Rainsberger :: jbrains.ca
>>> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpi...-***@public.gmane.org <javascript:>>
>>> wrote:
>>>
>>>> Thanks Steve, this begins to answer some of the questions I have been
>>>> having lately as well. I just watched your "Fractal TDD" video, and it
>>>> touched on this quite a bit
>>>>
>>>> On a recent project, I had it quite easy in that I did not have any
>>>> real external dependencies to worry about. It was actually a console
>>>> application. As described by GOOS, for every new feature I was wanting to
>>>> add, I would write an end-to-end test that exercised the feature, and then
>>>> I would drive down to where the classes were implementing things all the
>>>> way down (like tortoises). I was not sure about this because at each level
>>>> I felt like I was kind of duplicating tests. Because there were tests at
>>>> each level, I felt like the tests were too coupled with my design.
>>>>
>>>> Then I started thinking that maybe I just needed 2 tests, one on the
>>>> very top, an acceptance test that shows an example of feature, then ones
>>>> right where there were actually changes happening.
>>>>
>>>> So, first question... Is this right? It felt better. You are saying
>>>> that you should just write the full end-to-end acceptance test on a few of
>>>> the main paths, like the main "Happy Cases", or the ones that provide the
>>>> most risk, then for the others drive just a little deeper so you don't have
>>>> to take everything through the brittle outter layers, past your ports and
>>>> adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>>>
>>>> On another project I was working on, this was much more the case. I
>>>> would actually test through a mocked version of my "port" to test
>>>> "edge-to-edge". I wondered at that time how much the interface of my port
>>>> should resemble the external resource I was adapting. From what I have
>>>> seen, you are saying it should not, it should more reflect the needs of
>>>> your domain. So, for my second question...
>>>>
>>>> As I worked this way, I would quite often find code that WANTED to live
>>>> in this boundary between my domain classes and my adapters, or in the
>>>> adapter itself. This adapter area became dangerous ground. I had a hard
>>>> time budding off classes in between in a tested way. I actually struggle
>>>> with budding off in-between classes pretty badly in general. All of my
>>>> tests seem to be so tied to the interfaces of their collaborators that to
>>>> bud something in-between makes everything break. It is probably a more
>>>> systemic problem now that I think of it. Anyway, it is a real problem in
>>>> particular for this adapter area because as I learn more about the external
>>>> resource I am using and what I am required to do with it, code collects in
>>>> this adapter layer that seems untestable. Perhaps that is where I need to
>>>> be doing "integration tests" that I have been avoiding?
>>>>
>>>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
>>>>>
>>>>> That sounds reasonable.
>>>>>
>>>>> The trick is not to do too much through the top-level tests, otherwise
>>>>> they grow out of control. In your case, probably there would be a smaller
>>>>> number of top-level tests that show the main paths through the system: type
>>>>> in a name, show what happens when a result is or is not found. Then I'd
>>>>> probably have more detailed testing lower down to exercise the components
>>>>> that did the phonetic look up, and the translation to coordinates.
>>>>>
>>>>> S
>>>>>
>>>>> On 22 Jun 2012, at 12:47, Claudio Santos wrote:
>>>>> > I work with Geographic Information Systems, and most of the
>>>>> > systems I develop are strongly tied to the user interface.
>>>>> >
>>>>> > My doubt is this: say my system must show a geographic location
>>>>> > based on a phonetic search.
>>>>> >
>>>>> > I would start development by writing a test that checks that,
>>>>> > when a user searches, the correct geolocation is shown on the
>>>>> > map.
>>>>> >
>>>>> > Driven by that test, I would create a web page with a text box;
>>>>> > the search would fail, then I would implement the phonetic
>>>>> > search until the test no longer fails, and after that I would
>>>>> > build my application step by step, always starting from a
>>>>> > failing piece of user-interface functionality.
>>>>> >
>>>>> > Is this the right concept?
>>>>> >
>>>>> > About the rules I mentioned, I believe most of them concern
>>>>> > best practices for developing object-oriented software, always
>>>>> > considering cohesion and coupling.
>>>>> >
>>>>> > Many thanks for the answer...
>>>>> >
>>>>> >
>>>>> > On Friday, June 22, 2012 5:32:23 AM UTC-3, Steve Freeman wrote:
>>>>> >>
>>>>> >>> I'm reading the book, and my main doubt is: how do I start
>>>>> >>> software development?
>>>>> >>>
>>>>> >>> I would like to know if I must start by writing a test using the
>>>>> >>> Selenium framework, because the systems I develop always serve
>>>>> >>> users who need to enter information through text boxes.
>>>>> >>
>>>>> >> that might be a good start. What would you like to show that the
>>>>> >> system can do?
>>>>> >>
>>>>> >>> Is this a rule that all of us must follow?
>>>>> >>
>>>>> >> there are few rules that everyone must follow.
>>>>> >
>>>>>
>>>>> Steve Freeman
>>>>>
>>>>> Winner of the Agile Alliance Gordon Pask award 2006
>>>>> Book: http://www.growing-object-oriented-software.com
>>>>>
>>>>> +44 797 179 4105
>>>>> Twitter: @sf105
>>>>> Higher Order Logic Limited
>>>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
>>>>> Company registered in England & Wales. Number 7522677
>>>>>
>>>>>
>>>>>
>>>>>
>>

--

---
You received this message because you are subscribed to the Google Groups "Growing Object-Oriented Software" group.
To unsubscribe from this group and stop receiving emails from it, send an email to growing-object-oriented-software+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
For more options, visit https://groups.google.com/d/optout.
Nat Pryce
2014-08-11 08:58:02 UTC
Permalink
I agree.

Parallelisation can cut down the time it takes to run system tests but it
doesn't cut down the time it takes to work out why failing tests have
failed.

The more of the system that is involved in a test, the harder it is to
diagnose failures and the more causes of failure are involved. As you say,
tests are more likely to flicker. In my experience, system tests are also
more likely to be disrupted by environmental issues -- network contention,
database servers going down, etc. So the more system tests you run, the
more time you're going to spend diagnosing failures. Also, you're going to
see those failures later, because you have to package, deploy and test the
app before seeing them.

I've been trying a different approach for acceptance testing recently --
using Llewelyn Falco's "Approval Testing" approach to generate user-facing
documentation in Markdown. Instead of using different glue code to run
Spec-by-example documents against different levels of the system, I can
write a different test to generate an identical document. I find approval
testing to be a much simpler approach than Spec-by-Example. Approval
testing allows tests to generate documents that could not be parsed for
test input in a Spec-by-Example workflow, such as visualisations for
calculations that involve a lot of data.
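
A minimal sketch of that workflow (the file layout and names here are my own illustration, not any particular approval-testing library's API): the test renders a user-facing Markdown document from the system and diffs it against an approved copy on disk; a mismatch leaves a "received" file for a human to review and promote.

```python
from pathlib import Path

def verify(name: str, received: str, approvals: Path = Path("approvals")) -> bool:
    """Approval check: compare `received` text against the approved copy.

    On mismatch (or first run) a *.received.md file is written; a human
    promotes it to *.approved.md to approve the change.
    """
    approvals.mkdir(exist_ok=True)
    approved_file = approvals / f"{name}.approved.md"
    received_file = approvals / f"{name}.received.md"
    approved = approved_file.read_text() if approved_file.exists() else ""
    if received == approved:
        received_file.unlink(missing_ok=True)  # clean up any stale review file
        return True
    received_file.write_text(received)  # leave evidence for review and diffing
    return False

def render_search_report(query: str, hits: list[tuple[str, float, float]]) -> str:
    """Generate the user-facing document; it can be produced end-to-end or
    straight from domain objects, as long as the output is identical."""
    lines = [f"# Search: {query}", "", "| Place | Lat | Lon |", "|---|---|---|"]
    lines += [f"| {place} | {lat} | {lon} |" for place, lat, lon in hits]
    return "\n".join(lines) + "\n"
```

The same verify() call can then back both a browser-driven test and a domain-level test, which is the "different test, identical document" move described above.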

--Nat


On 11 August 2014 00:38, Daniel Wellman <etldan-***@public.gmane.org> wrote:

> This was a thread from a few years ago which turned into a discussion of
> acceptance tests, end-to-end tests, system tests, and where people might
> test what. I thought there were a lot of helpful (to me, at least) stories
> of how people migrate acceptance tests from end-to-end to directly testing
> the domain objects, and migrating end-to-end acceptance tests into a suite
> of system tests.
>
> Now, a few years later, I'm wondering if anyone has any new insights or
> things they've tried that is working (or not) for them?
>
> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an
> experiment to build acceptance tests (in Cucumber) that can be run either
> end-to-end with a browser or using lower-level objects by changing an
> environment variable. Here's a link to the talk:
> https://skillsmatter.com/skillscasts/3957-dont-you-trust-me-a-technical-approach-to-building-trust-and-consensus
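
As I understand Seb's experiment (the wiring below is my own sketch, not his code, and the variable name is made up), it amounts to choosing the system driver from an environment variable, so the same scenario steps can run either through a browser or directly against lower-level objects:

```python
import os

class DomainDriver:
    """Fast path: exercises the application objects directly."""
    def search(self, name: str) -> str:
        return f"result for {name}"

class BrowserDriver:
    """Slow path: would wrap Selenium/WebDriver in a real setup."""
    def search(self, name: str) -> str:
        raise NotImplementedError("requires a real browser")

def make_driver():
    # Step definitions depend only on the driver's interface; the
    # ACCEPTANCE_LEVEL variable picks the implementation at run time.
    if os.environ.get("ACCEPTANCE_LEVEL", "domain") == "browser":
        return BrowserDriver()
    return DomainDriver()
```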
>
>
> One thing I've learned is that when builds get long enough, it's now easy
> enough to set up the build to be run in parallel. I've seen services that
> do this, and other implementations done by hand to get the same effect. My
> take is that this can get you some substantial gains quickly in terms of
> cutting total build time, but these end-to-end tests often (but not always)
> tend to flicker the most (as compared to unit tests or some integration
> tests). Thinking more about what these tests actually cover, deciding what
> level of test is appropriate, and then rewriting or refactoring seems to be
> working well from what I've tried so far.
>
> Anyone else?
>
> Cheers,
> Dan
>
>
> On Monday, June 25, 2012 5:23:51 PM UTC-4, Ben Biddington wrote:
>>
>> > What would the acceptance tests look like if they aren't end-to-end?
>>
>> I have been considering this also. I often fall into the trap of having
>> controller-level abstractions influence design, so I wondered about writing
>> the UI adapter *last*.
>>
>> Perhaps these are *eventual*-end-to-end tests.
>>
>> <bb />
>>
>> On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
>>>
>>> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment
>>> about acceptance tests vs. integration tests. Does what I describe below
>>> align with your philosophies, or have I missed something?
>>>
>>> I kept thinking that the style described in the book was to prefer
>>> acceptance tests to *always* be end-to-end tests. The book says this (p.
>>> 10): "Our preferred implementation of the "role" of acceptance testing is
>>> to write end-to-end tests which, as we just noted, should be as end-to-end
>>> as possible, our bias often leads us to use these terms interchangeably
>>> although, in some cases, acceptance tests might not be end-to-end."
>>>
>>> Then I saw Steve say this:
>>>
>>>> The trick is not to do too much through the top-level tests, otherwise
>>>> they grow out of control. In your case, probably there would be a smaller
>>>> number of top-level tests that show the main paths through the system
>>>
>>>
>>> Which made me start thinking that there was likely a point in the
>>> project's lifespan where the cost of running or writing those acceptance
>>> tests as end-to-end tests became too high. Perhaps it's the length of the
>>> tests, perhaps the team has grown confident enough with their progress that
>>> they know when they need an end-to-end test and when they don't.
>>>
>>> Then I saw J.B. say this, which gives me some confirmation that others
>>> are also eventually switching the default style of acceptance tests from
>>> end-to-end to something ... smaller.
>>>
>>> I slowly move end to end tests into a system test suite, then replace
>>>> them with isolated tests to check each layer more carefully.
>>>> I principally avoid exhaustive end to end tests to show basic
>>>> correctness; I save them for system level problems, like response time or
>>>> scaling.
>>>
>>>
>>> So then I wondered, "What would the acceptance tests look like if they
>>> aren't end-to-end?" One idea I had was, "Use the ports and adapters
>>> architecture and swap out some of the production adapters with
>>> test-specific adapters." For example, replace the production database with
>>> versions that store their data in-memory.
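
That adapter swap can be sketched as follows (all names are illustrative, not from the book): the domain depends only on a port expressed in its own terms, and an acceptance test substitutes an in-memory adapter for the production database.

```python
from typing import Optional, Protocol

Coords = tuple[float, float]

class LocationRepository(Protocol):
    """Port: what the domain needs from storage, in domain terms."""
    def find_by_phonetic(self, name: str) -> Optional[Coords]: ...

def phonetic_key(name: str) -> str:
    # Crude stand-in for a real phonetic algorithm: drop vowels and spaces.
    return "".join(c for c in name.upper() if c not in "AEIOU ")

class InMemoryLocationRepository:
    """Test-specific adapter: same port, data held in a dict."""
    def __init__(self) -> None:
        self._by_key: dict[str, Coords] = {}

    def add(self, name: str, coords: Coords) -> None:
        self._by_key[phonetic_key(name)] = coords

    def find_by_phonetic(self, name: str) -> Optional[Coords]:
        return self._by_key.get(phonetic_key(name))

class GeoSearch:
    """Domain service; it only ever sees the port, never a concrete database."""
    def __init__(self, repository: LocationRepository) -> None:
        self._repository = repository

    def locate(self, spoken_name: str) -> Optional[Coords]:
        return self._repository.find_by_phonetic(spoken_name)
```

Production wiring would hand GeoSearch a database-backed adapter implementing the same port; the acceptance test wires in the in-memory version and runs much faster.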
>>>
>>> I've spent the last eight years primarily working on web applications,
>>> so when I hear "as end-to-end as possible", I think of exercising the
>>> browser using a tool like Selenium / WebDriver. Writing a new acceptance
>>> test for every new feature in a long-living and expanding system using
>>> Selenium gives me some pause. Perhaps when the team decides that not all
>>> new acceptance tests need to be fully end-to-end, there might be other ways
>>> to handle that - perhaps replace the UI adapter with something else, say a
>>> headless browser like HtmlUnit or something.
>>>
>>> And as I write this, it also occurs to me that if in fact a large webapp
>>> grows over several years, then perhaps this is an indicator to break up
>>> that system into smaller webapps or systems or other units.
>>>
>>>
>>> Am I off my rocker?
>>>
>>> Dan
>>>
>>>
>>>
>>>
>>> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <jbrain...-***@public.gmane.org
>>> > wrote:
>>>
>>>> I find success in combining a few end to end tests for basic flow and
>>>> perhaps one key error flow with exhaustive tiny isolated tests.
>>>>
>>>> If I'm not sure where I'm going, then I typically start with end to end
>>>> tests, then when they start to duplicate themselves too much, I dive down.
>>>>
>>>> I slowly move end to end tests into a system test suite, then replace
>>>> them with isolated tests to check each layer more carefully.
>>>>
>>>> I principally avoid exhaustive end to end tests to show basic
>>>> correctness; I save them for system level problems, like response time or
>>>> scaling.
>>>>
>>>> J. B. Rainsberger :: jbrains.ca
>>>> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpi...-***@public.gmane.org> wrote:
>>>>
>>>>> Thanks Steve, this begins to answer some of the questions I have been
>>>>> having lately as well. I just watched your "Fractal TDD" video, and it
>>>>> touched on this quite a bit
>>>>>
>>>>> On a recent project, I had it quite easy in that I did not have any
>>>>> real external dependencies to worry about. It was actually a console
>>>>> application. As described by GOOS, for every new feature I was wanting to
>>>>> add, I would write an end-to-end test that exercised the feature, and then
>>>>> I would drive down to where the classes were implementing things all the
>>>>> way down (like tortoises). I was not sure about this because at each level
>>>>> I felt like I was kind of duplicating tests. Because there were tests at
>>>>> each level, I felt like the tests were too coupled with my design.
>>>>>
>>>>> Then I started thinking that maybe I just needed 2 tests, one on the
>>>>> very top, an acceptance test that shows an example of the feature, then ones
>>>>> right where there were actually changes happening.
>>>>>
>>>>> So, first question... Is this right? It felt better. You are saying
>>>>> that you should just write the full end-to-end acceptance test on a few of
>>>>> the main paths, like the main "Happy Cases", or the ones that provide the
>>>>> most risk, then for the others drive just a little deeper so you don't have
>>>>> to take everything through the brittle outer layers, past your ports and
>>>>> adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>>>>
>>>>> On another project I was working on, this was much more the case. I
>>>>> would actually test through a mocked version of my "port" to test
>>>>> "edge-to-edge". I wondered at that time how much the interface of my port
>>>>> should resemble the external resource I was adapting. From what I have
>>>>> seen, you are saying it should not, it should more reflect the needs of
>>>>> your domain. So, for my second question...
>>>>>
>>>>> As I worked this way, I would quite often find code that WANTED to
>>>>> live in this boundary between my domain classes and my adapters, or in the
>>>>> adapter itself. This adapter area became dangerous ground. I had a hard
>>>>> time budding off classes in between in a tested way. I actually struggle
>>>>> with budding off in-between classes pretty badly in general. All of my
>>>>> tests seem to be so tied to the interfaces of their collaborators that to
>>>>> bud something in-between makes everything break. It is probably a more
>>>>> systemic problem now that I think of it. Anyway, it is a real problem in
>>>>> particular for this adapter area because as I learn more about the external
>>>>> resource I am using and what I am required to do with it, code collects in
>>>>> this adapter layer that seems untestable. Perhaps that is where I need to
>>>>> be doing "integration tests" that I have been avoiding?



--
http://www.natpryce.com

Steve Freeman
2014-08-11 13:49:33 UTC
Permalink
And, of course, there's nothing to stop one pushing the approved document (or a first cut of it) forward before the code exists...

Steve Freeman
http://www.higherorderlogic.com

Written on a phone, so please allow for typos and short content.


Nat Pryce
2014-08-11 15:18:18 UTC
Permalink
Yes, if using an easily editable format like Markdown you can write the
approved documentation before coding, and so get the same ol'
spec-by-example workflow.

--Nat


>>> the UI adapter *last*.
>>>
>>> Perhaps these are *eventual*-end-to-end tests.
>>>
>>> <bb />
>>>
>>> On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
>>>>
>>>> Steve and J.B., thank you - your insights just gave me an "A-ha!"
>>>> moment about acceptance tests vs. integration tests. Does what I describe
>>>> below align with your philosophies, or have I missed something?
>>>>
>>>> I kept thinking that the style described in the book was to prefer
>>>> acceptance tests to *always* be end-to-end tests. The book says this (p.
>>>> 10): "Our preferred implementation of the "role" of acceptance testing is
>>>> to write end-to-end tests which, as we just noted, should be as end-to-end
>>>> as possible, our bias often leads us to use these terms interchangeably
>>>> although, in some cases, acceptance tests might not be end-to-end."
>>>>
>>>> Then I saw Steve say this:
>>>>
>>>>> The trick is not to do too much through the top-level tests, otherwise
>>>>> they grow out of control. In your case, probably there would be a smaller
>>>>> number of top-level tests that show the main paths through the system
>>>>
>>>>
>>>> Which made me start thinking that there was likely a point in the
>>>> project's lifespan where the cost of running or writing those acceptance
>>>> tests as end-to-end tests became too high. Perhaps it's the length of the
>>>> tests, perhaps the team has grown confident enough with their progress that
>>>> they know when they need an end-to-end test and when they don't.
>>>>
>>>> Then I saw J.B. say this, which gives me some confirmation that others
>>>> are also eventually switching the default style of acceptance tests from
>>>> end-to-end to something ... smaller.
>>>>
>>>> I slowly move end to end tests into a system test suite, then replace
>>>>> them with isolated tests to check each layer more carefully.
>>>>> I principally avoid exhaustive end to end tests to show basic
>>>>> correctness; I save them for system level problems, like response time or
>>>>> scaling.
>>>>
>>>>
>>>> So then I wondered, "What would the acceptance tests look like if they
>>>> aren't end-to-end?" One idea I had was, "Use the ports and adapters
>>>> architecture and swap out some of the production adapters with
>>>> test-specific adapters." For example, replace the production database with
>>>> versions that store their data in-memory.
>>>>
>>>> I've spent the last eight years primarily working on web applications,
>>>> so when I hear as "end-to-end" as possible, I think of exercising the
>>>> browser using a tool like Selenium / WebDriver. Writing a new acceptance
>>>> test for every new feature in a long-living and expanding system using
>>>> Selenium gives me some pause. Perhaps when the team decides that not all
>>>> new acceptance tests need to be fully end-to-end, there might be other ways
>>>> to handle that - perhaps replace the UI adapter with something else, say a
>>>> headless browser like HtmlUnit or something.
>>>>
>>>> And as I write this, it also occurs to me that if in fact a large
>>>> webapp grows over several years, then perhaps this is an indicator to break
>>>> up that system into smaller webapps or systems or other units.
>>>>
>>>>
>>>> Am I off my rocker?
>>>>
>>>> Dan
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <
>>>> jbrain...-***@public.gmane.org> wrote:
>>>>
>>>>> I find success in combining a few end to end tests for basic flow and
>>>>> perhaps one key error flow with exhaustive tiny isolated tests.
>>>>>
>>>>> If I'm not sure where I'm going, then I typically start with end to
>>>>> end tests, then when they start to duplicate themselves too much, I dive
>>>>> down.
>>>>>
>>>>> I slowly move end to end tests into a system test suite, then replace
>>>>> them with isolated tests to check each layer more carefully.
>>>>>
>>>>> I principally avoid exhaustive end to end tests to show basic
>>>>> correctness; I save them for system level problems, like response time or
>>>>> scaling.
>>>>>
>>>>> J. B. Rainsberger :: jbrains.ca
>>>>> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpi...-***@public.gmane.org> wrote:
>>>>>
>>>>>> Thanks Steve, this begins to answer some of the questions I have been
>>>>>> having lately as well. I just watched your "Fractal TDD" video, and it
>>>>>> touched on this quite a bit
>>>>>>
>>>>>> On a recent project, I had it quite easy in that I did not have any
>>>>>> real external dependencies to worry about. It was actually a console
>>>>>> application. As described by GOOS, for every new feature I was wanting to
>>>>>> add, I would write an end-to-end test that exercised the feature, and then
>>>>>> I would drive down to where the classes were implementing things all the
>>>>>> way down (like tortoises). I was not sure about this because at each level
>>>>>> I felt like I was kind of duplicating tests. Because there were tests at
>>>>>> each level, I felt like the tests were too coupled with my design.
>>>>>>
>>>>>> Then I started thinking that maybe I just needed 2 tests, one on the
>>>>>> very top, an acceptance test that shows an example of feature, then ones
>>>>>> right where there were actually changes happening.
>>>>>>
>>>>>> So, first question... Is this right? It felt better. You are saying
>>>>>> that you should just write the full end-to-end acceptance test on a few of
>>>>>> the main paths, like the main "Happy Cases", or the ones that provide the
>>>>>> most risk, then for the others drive just a little deeper so you don't have
>>>>>> to take everything through the brittle outter layers, past your ports and
>>>>>> adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>>>>>
>>>>>> On another project I was working on, this was much more the case. I
>>>>>> would actually test through a mocked version of my "port" to test
>>>>>> "edge-to-edge". I wondered at that time how much the interface of my port
>>>>>> should resemble the external resource I was adapting. From what I have
>>>>>> seen, you are saying it should not, it should more reflect the needs of
>>>>>> your domain. So, for my second question...
>>>>>>
>>>>>> As I worked this way, I would quite often find code that WANTED to
>>>>>> live in this boundary between my domain classes and my adapters, or in the
>>>>>> adapter itself. This adapter area became dangerous ground. I had a hard
>>>>>> time budding off classes in between in a tested way. I actually struggle
>>>>>> with budding off in-between classes pretty badly in general. All of my
>>>>>> tests seem to be so tied to the interfaces of their collaborators that to
>>>>>> bud something in-between makes everything break. It is probably a more
>>>>>> systemic problem now that I think of it. Anyway, it is a real problem in
>>>>>> particular for this adapter area because as I learn more about the external
>>>>>> resource I am using and what I am required to do with it, code collects in
>>>>>> this adapter layer that seems untestable. Perhaps that is where I need to
>>>>>> be doing "integration tests" that I have been avoiding?
>>>>>>
>>>>>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
>>>>>>>
>>>>>>> That sounds reasonable.
>>>>>>>
>>>>>>> The trick is not to do too much through the top-level tests,
>>>>>>> otherwise they grow out of control. In your case, probably there would be a
>>>>>>> smaller number of top-level tests that show the main paths through the
>>>>>>> system: type in a name, show what happens when a result is or is not found.
>>>>>>> Then I'd probably have more detailed testing lower down to exercise the
>>>>>>> components that did the phonetic look up, and the translation to
>>>>>>> coordinates.
>>>>>>>
>>>>>>> S
>>>>>>>
>>>>>>> On 22 Jun 2012, at 12:47, Claudio Santos wrote:
>>>>>>> > I work with Geographic Information Systems, and most of all system
>>>>>>> I
>>>>>>> > develop have a strong link with user interface.
>>>>>>> >
>>>>>>> > The doubt is: when i said that my system must show a geographic
>>>>>>> location
>>>>>>> > through a phonetic search.
>>>>>>> >
>>>>>>> > I will start my develop, creating a test that valid if a user
>>>>>>> search, show
>>>>>>> > the correctly geolocation on the map.
>>>>>>> >
>>>>>>> > Through the creation of this test, i will create a web page with a
>>>>>>> test
>>>>>>> > box, then the search will fail, then i implement a test for
>>>>>>> phonetic search
>>>>>>> > until this not fail anymore, after that i will creating my
>>>>>>> application step
>>>>>>> > by step, ever failing some functionality on user interface.
>>>>>>> >
>>>>>>> > Is this the right concept?
>>>>>>> >
>>>>>>> > About the rules I have mentioned, i believe that most of then are
>>>>>>> around
>>>>>>> > the best pratices to develop Object Oriented SW, ever considering
>>>>>>> the
>>>>>>> > cohesion and coupling.
>>>>>>> >
>>>>>>> > Very tks for the answer...
>>>>>>> >
>>>>>>> >
>>>>>>> > On Friday, June 22, 2012 5:32:23 AM UTC-3, Steve Freeman wrote:
>>>>>>> >>
>>>>>>> >>> I'm reading the book, and the main doubt is, how to start a
>>>>>>> software
>>>>>>> >> development?
>>>>>>> >>>
>>>>>>> >>> I would to know if I must start to write a test using selennium
>>>>>>> >> framework, because the systems that I developed ever satisfy an
>>>>>>> users that
>>>>>>> >> need to put some information through some text boxs.
>>>>>>> >>
>>>>>>> >> that might be a good start. What would you like to show that the
>>>>>>> system
>>>>>>> >> can do?
>>>>>>> >>
>>>>>>> >>> Is this a main rule that all of us must to follow?
>>>>>>> >>
>>>>>>> >> there are few rules that everyone must follow.
>>>>>>> >
>>>>>>>
>>>>>>> Steve Freeman
>>>>>>>
>>>>>>> Winner of the Agile Alliance Gordon Pask award 2006
>>>>>>> Book: http://www.growing-object-oriented-software.com
>>>>>>>
>>>>>>> +44 797 179 4105
>>>>>>> Twitter: @sf105
>>>>>>> Higher Order Logic Limited
>>>>>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
>>>>>>> Company registered in England & Wales. Number 7522677
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>> --
>>
>> ---
>> You received this message because you are subscribed to the Google Groups
>> "Growing Object-Oriented Software" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to growing-object-oriented-software+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
>> For more options, visit https://groups.google.com/d/optout.
>>
>
>
>
> --
> http://www.natpryce.com
>
>
>



--
http://www.natpryce.com

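[The adapter-swapping idea Daniel raises above, replacing the production database with an in-memory version behind the same port, can be sketched roughly like this. All names here are illustrative, not from any of the projects discussed.]

```python
class CustomerStore:
    """Port: the persistence contract the domain code depends on."""

    def save(self, customer: dict) -> None:
        raise NotImplementedError

    def find(self, name: str):
        raise NotImplementedError


class InMemoryCustomerStore(CustomerStore):
    """Test adapter: honours the same contract, no database involved."""

    def __init__(self):
        self._rows = {}

    def save(self, customer: dict) -> None:
        self._rows[customer["name"]] = customer

    def find(self, name: str):
        return self._rows.get(name)


def register_customer(store: CustomerStore, name: str) -> None:
    # Domain logic under test: it knows only the port, never the adapter,
    # so an acceptance test can run it without the real database.
    store.save({"name": name, "status": "active"})
```

An acceptance test wires in `InMemoryCustomerStore`; production wiring would substitute a database-backed adapter with the same interface.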
Josue Barbosa dos Santos
2014-08-11 11:18:32 UTC
Permalink
> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an
> experiment to build acceptance tests (in Cucumber) that can be run either
> end-to-end with a browser or using lower-level objects by changing an
> environment variable.

In my job we do exactly this. And it is not exactly a new idea; I think I
read about this strategy in Lisa Crispin's book Testing Extreme Programming
(2002).

--
Abraços,
Josué
http://twitter.com/josuesantos


Steve Freeman
2014-08-11 13:50:13 UTC
Permalink
The idea has been around for ages although not many people actually do it.

S.

Steve Freeman
http://www.higherorderlogic.com

Written on a phone, so please allow for typos and short content.

>>>>>>
>>>>>> So, first question... Is this right? It felt better. You are saying that you should just write the full end-to-end acceptance test on a few of the main paths, like the main "Happy Cases", or the ones that carry the most risk, then for the others drive just a little deeper so you don't have to take everything through the brittle outer layers, past your ports and adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>>>>>
>>>>>> On another project I was working on, this was much more the case. I would actually test through a mocked version of my "port" to test "edge-to-edge". I wondered at that time how much the interface of my port should resemble the external resource I was adapting. From what I have seen, you are saying it should not, it should more reflect the needs of your domain. So, for my second question...
>>>>>>
>>>>>> As I worked this way, I would quite often find code that WANTED to live in this boundary between my domain classes and my adapters, or in the adapter itself. This adapter area became dangerous ground. I had a hard time budding off classes in between in a tested way. I actually struggle with budding off in-between classes pretty badly in general. All of my tests seem to be so tied to the interfaces of their collaborators that to bud something in-between makes everything break. It is probably a more systemic problem now that I think of it. Anyway, it is a real problem in particular for this adapter area because as I learn more about the external resource I am using and what I am required to do with it, code collects in this adapter layer that seems untestable. Perhaps that is where I need to be doing "integration tests" that I have been avoiding?
>>>>>>
>>>>>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
>>>>>>>
>>>>>>> That sounds reasonable.
>>>>>>>
>>>>>>> The trick is not to do too much through the top-level tests, otherwise they grow out of control. In your case, probably there would be a smaller number of top-level tests that show the main paths through the system: type in a name, show what happens when a result is or is not found. Then I'd probably have more detailed testing lower down to exercise the components that did the phonetic look up, and the translation to coordinates.
>>>>>>>
>>>>>>> S
>>>>>>>
>>>>>>> On 22 Jun 2012, at 12:47, Claudio Santos wrote:
>>>>>>> > I work with Geographic Information Systems, and most of the systems I
>>>>>>> > develop have a strong link with the user interface.
>>>>>>> >
>>>>>>> > My doubt is this: say my system must show a geographic location
>>>>>>> > through a phonetic search.
>>>>>>> >
>>>>>>> > I will start my development by creating a test that validates that, when
>>>>>>> > a user searches, the correct geolocation is shown on the map.
>>>>>>> >
>>>>>>> > To drive this test, I will create a web page with a text box; the search
>>>>>>> > will fail, then I implement the phonetic search until the test no longer
>>>>>>> > fails. After that I will grow my application step by step, always
>>>>>>> > starting from a failing piece of functionality in the user interface.
>>>>>>> >
>>>>>>> > Is this the right concept?
>>>>>>> >
>>>>>>> > About the rules I mentioned, I believe that most of them concern the
>>>>>>> > best practices for developing object-oriented software, always
>>>>>>> > considering cohesion and coupling.
>>>>>>> >
>>>>>>> > Many thanks for the answer...
>>>>>>> >
>>>>>>> >
>>>>>>> > On Friday, June 22, 2012 5:32:23 AM UTC-3, Steve Freeman wrote:
>>>>>>> >>
>>>>>>> >>> I'm reading the book, and my main doubt is: how do I start a software
>>>>>>> >>> development?
>>>>>>> >>>
>>>>>>> >>> I would like to know if I must start by writing a test using the Selenium
>>>>>>> >>> framework, because the systems that I develop always serve users who
>>>>>>> >>> need to put some information in through text boxes.
>>>>>>> >>
>>>>>>> >> that might be a good start. What would you like to show that the system
>>>>>>> >> can do?
>>>>>>> >>
>>>>>>> >>> Is this a main rule that all of us must follow?
>>>>>>> >>
>>>>>>> >> there are few rules that everyone must follow.
>>>>>>> >
>>>>>>>
>>>>>>> Steve Freeman
>>>>>>>
>>>>>>> Winner of the Agile Alliance Gordon Pask award 2006
>>>>>>> Book: http://www.growing-object-oriented-software.com
>>>>>>>
>>>>>>> +44 797 179 4105
>>>>>>> Twitter: @sf105
>>>>>>> Higher Order Logic Limited
>>>>>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
>>>>>>> Company registered in England & Wales. Number 7522677
>>
>> --
>>
>> ---
>> You received this message because you are subscribed to the Google Groups "Growing Object-Oriented Software" group.
>> To unsubscribe from this group and stop receiving emails from it, send an email to growing-object-oriented-software+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/***@public.gmane.org
>> For more options, visit https://groups.google.com/d/optout.
Malte Finsterwalder
2014-08-14 07:15:08 UTC
Permalink
If not a lot of people do it, that may be a hint.
Why aren't more people using this approach?
Is it too much work with too little payoff?
Is it just too complicated?

Greetings,
Malte

On 11.08.2014 15:50, Steve Freeman wrote:
> The idea has been around for ages although not many people actually do it.
>
> S.
>
> Steve Freeman
> http://www.higherorderlogic.com
>
> Written on a phone, so please allow for typos and short content.
>
> On 11 Aug 2014, at 12:18, Josue Barbosa dos Santos <josuesantos-***@public.gmane.org> wrote:
>
>>>>> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an experiment to build acceptance tests (in Cucumber) that can be run either end-to-end with a browser or using lower-level objects by changing an environment variable.
>>
>> I my job we do exactly this. And it is not exactly a new idea. I think I read about this strategy in Lisa Crispin Book: Testing Extreme Programming (2002).
>>
>> --
>> Abraços,
>> Josué
>> http://twitter.com/josuesantos
>>
>> On Sun, Aug 10, 2014 at 8:38 PM, Daniel Wellman <etldan-***@public.gmane.org> wrote:
>>> This was a thread from a few years ago which turned into a discussion of acceptance tests, end-to-end tests, system tests, and where people might test what. I thought there were a lot of helpful (to me, at least) stories of how people migrate acceptance tests from end-to-end to directly testing the domain objects, and migrating end-to-end acceptance tests into a suite of system tests.
>>>
>>> Now, a few years later, I'm wondering if anyone has any new insights or things they've tried that is working (or not) for them?
>>>
>>> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an experiment to build acceptance tests (in Cucumber) that can be run either end-to-end with a browser or using lower-level objects by changing an environment variable. Here's a link to the talk: https://skillsmatter.com/skillscasts/3957-dont-you-trust-me-a-technical-approach-to-building-trust-and-consensus
>>>
>>>
>>> One thing I've learned is that when builds get long enough, it's now easy enough to set up the build to be run in parallel. I've seen services that do this, and other implementations done by hand to get the same effect. My take is that this can get you some substantial gains quickly in terms of cutting total build time, but these end-to-end tests often (but not always) tend to flicker the most (as compared to unit tests or some integration tests). Thinking more about what these tests are actually covering and what level of test is appropriate and rewriting or refactoring seems to be working well from what I've tried so far.
>>>
>>> Anyone else?
>>>
>>> Cheers,
>>> Dan
>>>
>>>
>>> On Monday, June 25, 2012 5:23:51 PM UTC-4, Ben Biddington wrote:
>>>>
>>>>> What would the acceptance tests look like if they aren't end-to-end?
>>>>
>>>> I have been considering this also. I often fall into the trap of having controller-level abstractions influence design, so I wondered about writing the UI adapter last.
>>>>
>>>> Perhaps these are eventual-end-to-end tests.
>>>>
>>>> <bb />
>>>>
>>>> On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
>>>>>
>>>>> Steve and J.B., thank you - your insights just gave me an "A-ha!" moment about acceptance tests vs. integration tests. Does what I describe below align with your philosophies, or have I missed something?
>>>>>
>>>>> I kept thinking that the style described in the book was to prefer acceptance tests to *always* be end-to-end tests. The book says this (p. 10): "Our preferred implementation of the "role" of acceptance testing is to write end-to-end tests which, as we just noted, should be as end-to-end as possible, our bias often leads us to use these terms interchangeably although, in some cases, acceptance tests might not be end-to-end."
>>>>>
>>>>> Then I saw Steve say this:
>>>>>> The trick is not to do too much through the top-level tests, otherwise they grow out of control. In your case, probably there would be a smaller number of top-level tests that show the main paths through the system
>>>>>
>>>>> Which made me start thinking that there was likely a point in the project's lifespan where the cost of running or writing those acceptance tests as end-to-end tests became too high. Perhaps it's the length of the tests, perhaps the team has grown confident enough with their progress that they know when they need an end-to-end test and when they don't.
>>>>>
>>>>> Then I saw J.B. say this, which gives me some confirmation that others are also eventually switching the default style of acceptance tests from end-to-end to something ... smaller.
>>>>>
>>>>>> I slowly move end to end tests into a system test suite, then replace them with isolated tests to check each layer more carefully.
>>>>>> I principally avoid exhaustive end to end tests to show basic correctness; I save them for system level problems, like response time or scaling.
>>>>>
>>>>>
>>>>> So then I wondered, "What would the acceptance tests look like if they aren't end-to-end?" One idea I had was, "Use the ports and adapters architecture and swap out some of the production adapters with test-specific adapters." For example, replace the production database with versions that store their data in-memory.
>>>>>
>>>>> I've spent the last eight years primarily working on web applications, so when I hear "as end-to-end as possible", I think of exercising the browser using a tool like Selenium / WebDriver. Writing a new acceptance test for every new feature in a long-living and expanding system using Selenium gives me some pause. Perhaps when the team decides that not all new acceptance tests need to be fully end-to-end, there might be other ways to handle that - perhaps replace the UI adapter with something else, say a headless browser like HtmlUnit or something.
>>>>>
>>>>> And as I write this, it also occurs to me that if in fact a large webapp grows over several years, then perhaps this is an indicator to break up that system into smaller webapps or systems or other units.
>>>>>
>>>>>
>>>>> Am I off my rocker?
>>>>>
>>>>> Dan
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <***@gmail.com> wrote:
>>>>>> I find success in combining a few end to end tests for basic flow and perhaps one key error flow with exhaustive tiny isolated tests.
>>>>>>
>>>>>> If I'm not sure where I'm going, then I typically start with end to end tests, then when they start to duplicate themselves too much, I dive down.
>>>>>>
>>>>>> I slowly move end to end tests into a system test suite, then replace them with isolated tests to check each layer more carefully.
>>>>>>
>>>>>> I principally avoid exhaustive end to end tests to show basic correctness; I save them for system level problems, like response time or scaling.
>>>>>>
>>>>>> J. B. Rainsberger :: jbrains.ca
>>>>>>
>>>>>> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpi...-***@public.gmane.org> wrote:
>>>>>>> Thanks Steve, this begins to answer some of the questions I have been having lately as well. I just watched your "Fractal TDD" video, and it touched on this quite a bit
>>>>>>>
>>>>>>> On a recent project, I had it quite easy in that I did not have any real external dependencies to worry about. It was actually a console application. As described by GOOS, for every new feature I was wanting to add, I would write an end-to-end test that exercised the feature, and then I would drive down to where the classes were implementing things all the way down (like tortoises). I was not sure about this because at each level I felt like I was kind of duplicating tests. Because there were tests at each level, I felt like the tests were too coupled with my design.
>>>>>>>
>>>>>>> Then I started thinking that maybe I just needed 2 tests, one on the very top, an acceptance test that shows an example of feature, then ones right where there were actually changes happening.
>>>>>>>
>>>>>>> So, first question... Is this right? It felt better. You are saying that you should just write the full end-to-end acceptance test on a few of the main paths, like the main "Happy Cases", or the ones that carry the most risk, then for the others drive just a little deeper so you don't have to take everything through the brittle outer layers, past your ports and adapters to what I would call "edge-to-edge" rather than "end-to-end".
>>>>>>>
>>>>>>> On another project I was working on, this was much more the case. I would actually test through a mocked version of my "port" to test "edge-to-edge". I wondered at that time how much the interface of my port should resemble the external resource I was adapting. From what I have seen, you are saying it should not, it should more reflect the needs of your domain. So, for my second question...
>>>>>>>
>>>>>>> As I worked this way, I would quite often find code that WANTED to live in this boundary between my domain classes and my adapters, or in the adapter itself. This adapter area became dangerous ground. I had a hard time budding off classes in between in a tested way. I actually struggle with budding off in-between classes pretty badly in general. All of my tests seem to be so tied to the interfaces of their collaborators that to bud something in-between makes everything break. It is probably a more systemic problem now that I think of it. Anyway, it is a real problem in particular for this adapter area because as I learn more about the external resource I am using and what I am required to do with it, code collects in this adapter layer that seems untestable. Perhaps that is where I need to be doing "integration tests" that I have been avoiding?
>>>>>>>
>>>>>>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
>>>>>>>>
>>>>>>>> That sounds reasonable.
>>>>>>>>
>>>>>>>> The trick is not to do too much through the top-level tests, otherwise they grow out of control. In your case, probably there would be a smaller number of top-level tests that show the main paths through the system: type in a name, show what happens when a result is or is not found. Then I'd probably have more detailed testing lower down to exercise the components that did the phonetic look up, and the translation to coordinates.
>>>>>>>>
>>>>>>>> S
>>>>>>>>
>>>>>>>> On 22 Jun 2012, at 12:47, Claudio Santos wrote:
>>>>>>>>> I work with Geographic Information Systems, and most of the systems I
>>>>>>>>> develop have a strong link with the user interface.
>>>>>>>>>
>>>>>>>>> My doubt is this: say my system must show a geographic location
>>>>>>>>> through a phonetic search.
>>>>>>>>>
>>>>>>>>> I will start my development by creating a test that validates that, when
>>>>>>>>> a user searches, the correct geolocation is shown on the map.
>>>>>>>>>
>>>>>>>>> To drive this test, I will create a web page with a text box; the search
>>>>>>>>> will fail, then I implement the phonetic search until the test no longer
>>>>>>>>> fails. After that I will grow my application step by step, always
>>>>>>>>> starting from a failing piece of functionality in the user interface.
>>>>>>>>>
>>>>>>>>> Is this the right concept?
>>>>>>>>>
>>>>>>>>> About the rules I mentioned, I believe that most of them concern the
>>>>>>>>> best practices for developing object-oriented software, always
>>>>>>>>> considering cohesion and coupling.
>>>>>>>>>
>>>>>>>>> Many thanks for the answer...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Friday, June 22, 2012 5:32:23 AM UTC-3, Steve Freeman wrote:
>>>>>>>>>>
>>>>>>>>>>> I'm reading the book, and my main doubt is: how do I start a software
>>>>>>>>>>> development?
>>>>>>>>>>>
>>>>>>>>>>> I would like to know if I must start by writing a test using the Selenium
>>>>>>>>>>> framework, because the systems that I develop always serve users who
>>>>>>>>>>> need to put some information in through text boxes.
>>>>>>>>>>
>>>>>>>>>> that might be a good start. What would you like to show that the system
>>>>>>>>>> can do?
>>>>>>>>>>
>>>>>>>>>>> Is this a main rule that all of us must follow?
>>>>>>>>>>
>>>>>>>>>> there are few rules that everyone must follow.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Steve Freeman
>>>>>>>>
>>>>>>>> Winner of the Agile Alliance Gordon Pask award 2006
>>>>>>>> Book: http://www.growing-object-oriented-software.com
>>>>>>>>
>>>>>>>> +44 797 179 4105
>>>>>>>> Twitter: @sf105
>>>>>>>> Higher Order Logic Limited
>>>>>>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
>>>>>>>> Company registered in England & Wales. Number 7522677
Steve Freeman
2014-08-14 07:40:56 UTC
Permalink
I don't have an answer. It might be too hard, but then writing clean code is quite hard too so many people don't :)

S

On 14 Aug 2014, at 08:15, Malte Finsterwalder <malte-***@public.gmane.org> wrote:
> If not a lot of people do it, that may be a hint.
> Why aren't more people using this approach?
> Is it too much work with too little payoff?
> Is it just too complicated?
>
> Greetings,
> Malte
>
> On 11.08.2014 15:50, Steve Freeman wrote:
>> The idea has been around for ages although not many people actually do it.
>>
>> S.
>>
>> Steve Freeman
>> http://www.higherorderlogic.com
>>
>> Written on a phone, so please allow for typos and short content.
>>
>> On 11 Aug 2014, at 12:18, Josue Barbosa dos Santos <josuesantos-***@public.gmane.org> wrote:
>>
>>>>>> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an experiment to build acceptance tests (in Cucumber) that can be run either end-to-end with a browser or using lower-level objects by changing an environment variable.
>>>
>>> In my job we do exactly this. And it is not exactly a new idea. I think I read about this strategy in Lisa Crispin's book Testing Extreme Programming (2002).

Nat Pryce
2014-08-14 14:53:22 UTC
Permalink
I often see that teams start with one approach -- testing through the GUI,
for example -- and then bake assumptions into their acceptance tests that
stop them being able to validate the acceptance criteria against isolated
code. For example, they may write an acceptance test for rules in terms of
logging in and stepping through a process that the GUI currently imposes,
when what should really be tested are the rules that, in the full
system, control or constrain the data and actions made available to the
user by the GUI.


Using the Cucumber/BDD lingo, I argue "'When' considered harmful". If you can
write your acceptance tests in terms of functions or constraints then you
can more easily validate them against different scales of the system.
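
The "different scales" idea might be sketched like this (a minimal sketch in Python, chosen only for brevity; every name below is invented, not from any real system): the acceptance criterion is a constraint over any callable that answers the question, so it can be checked against the bare rule or against a driver that goes through more of the system.

```python
# Hypothetical domain rule: only premium subscribers get HD downloads.
def can_download_hd(subscription):
    return subscription == "PREMIUM"

class FakeGuiDriver:
    """Stand-in for a driver that would exercise the real GUI;
    here it simply delegates to the domain rule."""
    def hd_option_shown_for(self, subscription):
        return can_download_hd(subscription)

def check_hd_access(query):
    """The acceptance criterion, independent of which scale answers it."""
    assert query("PREMIUM") is True
    assert query("STANDARD") is False

check_hd_access(can_download_hd)                      # smallest scale
check_hd_access(FakeGuiDriver().hd_option_shown_for)  # through a driver
```

The same `check_hd_access` constraint could be bound to a Selenium-backed driver in a system test without rewriting the criterion itself.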

--Nat





--
http://www.natpryce.com

George Dinwiddie
2014-08-14 15:03:33 UTC
Permalink
Nat,

On 8/14/14 10:53 AM, Nat Pryce wrote:
> I often see that teams start with one approach -- testing through the
> GUI, for example -- and then bake assumptions into their acceptance
> tests that stop them being able to validate the acceptance criteria
> against isolated code. For example, they may write an acceptance test
> for rules in terms of logging in and stepping through a process that the
> GUI currently imposes, when actually what should be being tested are
> rules that, in the full system, control or constrain the data and
> actions made available to the user by the GUI.
>
>
> Using the cucmber/BDD lingo, I argue "When considered harmful". If you
> can write your acceptance tests in terms of functions or constraints
> then you can more easily validate them against different scales of the
> system.

Could you provide examples of acceptance tests expressed in terms of
functions and constraints? I suspect that this is the same orientation
toward interaction rather than state that underlies the "London vs
Detroit" TDD styles.

I also suspect that some will express the same algorithm twice, negating
the usefulness of the test. That's why I'd like to see your examples,
rather than just a description.

- George
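
The worry about "expressing the same algorithm twice" can be sidestepped by keeping the expected values as hand-picked concrete examples rather than recomputing them. A sketch with an invented rule (Python only for brevity; none of these names come from the thread):

```python
# Hypothetical rule: 10% off orders of 100 or more (integer money).
def price_with_discount(total):
    return total - total // 10 if total >= 100 else total

# Harmful: the test re-expresses the algorithm, so both can share a bug.
def test_duplicated_algorithm():
    for total in (50, 100, 200):
        expected = total - total // 10 if total >= 100 else total  # same code again!
        assert price_with_discount(total) == expected

# Better: independent, concrete examples chosen by a person.
def test_concrete_examples():
    assert price_with_discount(50) == 50
    assert price_with_discount(100) == 90
    assert price_with_discount(200) == 180

test_duplicated_algorithm()
test_concrete_examples()
```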


--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Kim Gräsman
2014-08-14 15:18:53 UTC
Permalink
George,

On Thu, Aug 14, 2014 at 5:03 PM, George Dinwiddie
<lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org> wrote:
>
> Could you provide examples of acceptance tests expressed in terms of
> functions and constraints? I suspect that this is the same orientation
> toward interaction rather than state that underlies the "London vs Detroit"
> TDD styles.
>
> I also suspect that some will express the same algorithm twice, negating the
> usefulness of the test. That's why I'd like to see your examples, rather
> than just a description.

I'm not Nat, but here are some examples I've collected over the years:

http://gojko.net/2010/04/13/how-to-implement-ui-testing-without-shooting-yourself-in-the-foot-2/
http://concordion.org/Technique.html
http://concordion.org/ScriptingMakeover.html

- Kim

Nat Pryce
2014-08-14 15:45:18 UTC
Permalink
I see a lot of G/W/T tests like this (made up) example:

Scenario: subscriber access to HD and SD On Demand movies

Given the user has a PREMIUM subscription
And a movie offered in both HD and SD
And the user has not set a format preference
When the user purchases the movie
Then they are offered choice of HD and SD format

Given the user has a PREMIUM subscription
And a movie offered in both HD and SD
And the user has set their format preference to SD
When the user purchases the movie
Then they are not offered choice of format
And they receive the movie in SD format

Given the user has a PREMIUM subscription
And a movie offered in both HD and SD
And the user has set their format preference to HD
When the user purchases the movie
Then they are not offered choice of format
And they receive the movie in HD format

Given the user has a PREMIUM subscription
And a movie offered in SD only
When the user purchases the movie
Then they are not offered choice of format
And they receive the movie in SD format

Given the user has a STANDARD subscription
And a movie offered in both HD and SD
When the user purchases the movie
Then they are not offered choice of format
And they receive the movie in SD format

... etc. etc. etc.

Lots of duplication, lots of irrelevant detail. It's hard to spot the
ambiguous/missing test cases. And this is not as bad as I usually see it.
Imagine that "When the user purchases the movie" is actually: "the user
navigates to On Demand movies, And selects the movie, And presses the
purchase button." and "they receive the movie in XX format" is actually
"they navigate to the downloads screen. And select the movie. And the
format icon is XX".

I prefer to write the rules of what format gets offered when as a relation
or function. (I hope the formatting comes out).

User Subscription | Format Preference | Movie Format | Format Downloaded
------------------+-------------------+--------------+-------------------
PREMIUM | None | HD available | User's choice
PREMIUM | HD | HD available | HD
PREMIUM | SD | HD available | SD
PREMIUM | Any | SD only | SD
------------------+-------------------+--------------+-------------------
STANDARD | n/a | HD available | SD
STANDARD | n/a | SD | SD

That can be more easily targeted at an object in a unit test or used to
drive the GUI in a system test, or interact with presenter code that sits
behind the UI, or a service.
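
For example, the table maps almost directly onto a data-driven unit test. Here is a hypothetical Python sketch (the rule function `format_downloaded` and every name in it are invented for illustration, not taken from any real system):

```python
# Hypothetical sketch: the decision table expressed as a data-driven test.
# The rule function and all names are invented for illustration.

def format_downloaded(subscription, preference, movie_format):
    """One possible encoding of the rules in the table above."""
    if subscription == "STANDARD":
        return "SD"                  # STANDARD users always get SD
    if movie_format == "SD only":
        return "SD"                  # no HD to offer
    if preference is None:
        return "User's choice"       # no preference set: offer a choice
    return preference                # honour the stored preference

# Each row of the table becomes one test case. The "Any" preference row
# is expanded into two concrete values.
TABLE = [
    # subscription, preference, movie format,   expected download
    ("PREMIUM",  None, "HD available", "User's choice"),
    ("PREMIUM",  "HD", "HD available", "HD"),
    ("PREMIUM",  "SD", "HD available", "SD"),
    ("PREMIUM",  "HD", "SD only",      "SD"),
    ("PREMIUM",  "SD", "SD only",      "SD"),
    ("STANDARD", None, "HD available", "SD"),
    ("STANDARD", None, "SD only",      "SD"),
]

def run_table(decide=format_downloaded):
    """Check every row against a deciding function; return rows checked."""
    for subscription, preference, movie_format, expected in TABLE:
        actual = decide(subscription, preference, movie_format)
        assert actual == expected, (subscription, preference, movie_format)
    return len(TABLE)
```

Passing a different `decide` function -- one that drives the GUI or a presenter and reads back the result -- would reuse the same table at another scale of the system.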

I don't think this has anything to do with London vs Detroit TDD. I've
been corrupted by using Fit!

--Nat




On 14 August 2014 16:03, George Dinwiddie <lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org> wrote:

> Nat,
>
>
> On 8/14/14 10:53 AM, Nat Pryce wrote:
>
>> I often see that teams start with one approach -- testing through the
>> GUI, for example -- and then bake assumptions into their acceptance
>> tests that stop them being able to validate the acceptance criteria
>> against isolated code. For example, they may write an acceptance test
>> for rules in terms of logging in and stepping through a process that the
>> GUI currently imposes, when actually what should be being tested are
>> rules that, in the full system, control or constrain the data and
>> actions made available to the user by the GUI.
>>
>>
>> Using the cucmber/BDD lingo, I argue "When considered harmful". If you
>> can write your acceptance tests in terms of functions or constraints
>> then you can more easily validate them against different scales of the
>> system.
>>
>
> Could you provide examples of acceptance tests expressed in terms of
> functions and constraints? I suspect that this is the same orientation
> toward interaction rather than state that underlies the "London vs Detroit"
> TDD styles.
>
> I also suspect that some will express the same algorithm twice, negating
> the usefulness of the test. That's why I'd like to see your examples,
> rather than just a description.
>
> - George
>
>
>> --Nat
>>
>>
>> On 14 August 2014 08:40, Steve Freeeman <steve-/6t1Ty3A97Vaa/***@public.gmane.org
>> <mailto:steve-/6t1Ty3A97Vaa/***@public.gmane.org>> wrote:
>>
>> I don't have an answer. It might be too hard, but then writing clean
>> code is quite hard too so many people don't :)
>>
>> S
>>
>> On 14 Aug 2014, at 08:15, Malte Finsterwalder
>> <malte-***@public.gmane.org <mailto:malte-***@public.gmane.org>> wrote:
>> > If not a lot of people do it, that may be a hint.
>> > Why aren't more people using this approach?
>> > Is it too much work with too little payoff?
>> > Is it just too complicated?
>> >
>> > Greetings,
>> > Malte
>> >
>> > On 11.08.2014 15:50, Steve Freeman wrote:
>> >> The idea has been around for ages although not many people
>> actually do it.
>> >>
>> >> S.
>> >>
>> >> Steve Freeman
>> >> http://www.higherorderlogic.com
>> >>
>> >> Written on a phone, so please allow for typos and short content.
>> >>
>> >> On 11 Aug 2014, at 12:18, Josue Barbosa dos Santos
>> <josuesantos-***@public.gmane.org <mailto:josuesantos-***@public.gmane.org>> wrote:
>> >>
>> >>>>> I saw a video of Seb Rose's talk at Cuke Up about how he's
>> trying an experiment to build acceptance tests (in Cucumber) that
>> can be >>run either end-to-end with a browser or using lower-level
>> objects by changing an environment variable.
>> >>>
>> >>> I my job we do exactly this. And it is not exactly a new idea.
>> I think I read about this strategy in Lisa Crispin Book: Testing
>> Extreme Programming (2002).
>>
>>
>>
>>
>>
>> --
>> http://www.natpryce.com
>>
>>
>
> --
> ----------------------------------------------------------------------
> * George Dinwiddie * http://blog.gdinwiddie.com
> Software Development http://www.idiacomputing.com
> Consultant and Coach http://www.agilemaryland.org
> ----------------------------------------------------------------------
>
>
>



--
http://www.natpryce.com

Steve Freeeman
2014-08-14 16:35:38 UTC
Permalink
+1. I have a videoed rant out there somewhere called "Given/When/Then Considered Harmful" along these lines.

I gave it at Skillsmatter and had a couple of BAs nodding vigorously in the front row.

Once again, we should have listened to Ward.

S

On 14 Aug 2014, at 16:45, Nat Pryce <nat.pryce-***@public.gmane.org> wrote:
> Lots of duplication, lots of irrelevant detail. It's hard to spot the ambiguous/missing test cases. And this is not as bad as I usually see it. Imagine that "When the user purchases the movie" is actually: "the user navigates to On Demand movies, And selects the movie, And presses the purchase button." and "they receive the movie in XX format" is actually "they navigate to the downloads screen. And select the movie. And the format icon is XX".
>
> I prefer to write the rules of what format gets offered when as a relation or function. (I hope the formatting comes out).
>
> User Subscription | Format Preference | Movie Format | Format Downloaded
> ------------------+-------------------+--------------+-------------------
> PREMIUM | None | HD available | User's choice
> PREMIUM | HD | HD available | HD
> PREMIUM | SD | HD available | SD
> PREMIUM | Any | SD only | SD
> ------------------+-------------------+--------------+-------------------
> STANDARD | n/a | HD available | SD
> STANDARD | n/a | SD | SD
>
> That can be more easily targeted at an object in a unit test or used to drive the GUI in a system test, or interact with presenter code that sits behind the UI, or a service.
>
> I don't think this has anything to do with London vs Detroit TDD. I've been corrupted by using Fit!

Kim Gräsman
2014-08-14 18:44:39 UTC
Permalink
On Thu, Aug 14, 2014 at 5:45 PM, Nat Pryce <nat.pryce-***@public.gmane.org> wrote:
>
> I prefer to write the rules of what format gets offered when as a relation
> or function. (I hope the formatting comes out).
>
> User Subscription | Format Preference | Movie Format | Format Downloaded
> ------------------+-------------------+--------------+-------------------
> PREMIUM | None | HD available | User's choice
> PREMIUM | HD | HD available | HD
> PREMIUM | SD | HD available | SD
> PREMIUM | Any | SD only | SD
> ------------------+-------------------+--------------+-------------------
> STANDARD | n/a | HD available | SD
> STANDARD | n/a | SD | SD
>
> That can be more easily targeted at an object in a unit test or used to
> drive the GUI in a system test, or interact with presenter code that sits
> behind the UI, or a service.
>
> I don't think this has anything to do with London vs Detroit TDD. I've been
> corrupted by using Fit!

Oh look, a ColumnFixture! :-)

- Kim

Nat Pryce
2014-08-14 19:09:19 UTC
Permalink
Or is it a RowFixture? I could never remember -- tables always have
both columns and rows! I found it ironic that the names in Fit were so
confusing, given that it was a tool for creating a system of names.

--Nat





--
http://www.natpryce.com

Rick Mugridge
2014-08-14 20:42:53 UTC
Permalink
Cucumber has tables too, see: http://cukes.info/step-definitions.html




Steve Freeeman
2014-08-14 21:24:08 UTC
Permalink
true, but they aren't as flexible and the error reporting isn't as good.

S

On 14 Aug 2014, at 21:42, Rick Mugridge <rick.mugridge-***@public.gmane.org> wrote:
> Cucumber has tables too, see: http://cukes.info/step-definitions.html

George Dinwiddie
2014-08-14 20:43:57 UTC
Permalink
Hi, Nat,

On 8/14/14 11:45 AM, Nat Pryce wrote:
> I see a lot of G/W/T tests like this (made up) example:
>
> Scenario: subscriber access to HD and SD On Demand movies
>
> Given the user has a PREMIUM subscription
> And a movie offered in both HD and SD
> And the user has not set a format preference
> When the user purchases the movie
> Then they are offered choice of HD and SD format
>
> Given the user has a PREMIUM subscription
> And a movie offered in both HD and SD
> And the user has set their format preference to SD
> When the user purchases the movie
> Then they are not offered choice of format
> And they receive the movie in SD format
>
> Given the user has a PREMIUM subscription
> And a movie offered in both HD and SD
> And the user has set their format preference to HD
> When the user purchases the movie
> Then they are not offered choice of format
> And they receive the movie in HD format
>
> Given the user has a PREMIUM subscription
> And a movie offered in SD only
> When the user purchases the movie
> Then they are not offered choice of format
> And they receive the movie in SD format
>
> Given the user has a STANDARD subscription
> And a movie offered in both HD and SD
> When the user purchases the movie
> Then they are not offered choice of format
> And they receive the movie in SD format
>
> ... etc. etc. etc.
>
> Lots of duplication, lots of irrelevant detail. It's hard to spot the
> ambiguous/missing test cases. And this is not as bad as I usually see
> it. Imagine that "When the user purchases the movie" is actually: "the
> user navigates to On Demand movies, And selects the movie, And presses
> the purchase button." and "they receive the movie in XX format" is
> actually "they navigate to the downloads screen. And select the movie.
> And the format icon is XX".

Oh, yes, I've seen a lot of bad GWT scenarios. I've seen badly written
acceptance tests no matter what the format.

>
> I prefer to write the rules of what format gets offered when as a
> relation or function. (I hope the formatting comes out).
>
> User Subscription | Format Preference | Movie Format | Format Downloaded
> ------------------+-------------------+--------------+-------------------
> PREMIUM | None | HD available | User's choice
> PREMIUM | HD | HD available | HD
> PREMIUM | SD | HD available | SD
> PREMIUM | Any | SD only | SD
> ------------------+-------------------+--------------+-------------------
> STANDARD | n/a | HD available | SD
> STANDARD | n/a | SD | SD

To me, this looks like a GWT Scenario Outline with the GWT part being
implicit. Personally, I prefer my specs to be a bit more explicit.
Perhaps something like:

Given a <User Subscription> user has specified <Format Preference>
preference
When the user requests a movie available in <Movie Format> format
Then the <Format Downloaded> format is downloaded.

Granted, that "When the user requests a movie" is rather weak in
expression, but I'm not having a discussion with the product owner and
can't ask clarifying questions. I would like *something* that lets me
know *when* something should be true. Certainly movies aren't being
downloaded without the user's intervention, are they?
>
> That can be more easily targeted at an object in a unit test or used to
> drive the GUI in a system test, or interact with presenter code that
> sits behind the UI, or a service.

If not expressed as user-interaction steps, then GWT can easily drive
any level of the implementation.

>
> I don't think this has anything to do with London vs Detroit TDD. I've
> been corrupted by using Fit!

Yes, I thought you meant something else by "functions and constraints."
Tests driven by tabular data are excellent for some things, and not so
good for others. And I've seen tests driven by tabular data that's
completely irrelevant to the point of the test, so that's insufficient
to produce good acceptance tests.

BTW, I don't see "HD only" as a "Movie Format" choice. Couldn't that be
possible? If so, wouldn't the "HD and SD" description in your earlier
scenarios be better than the "HD available" in the table? Strengthening
your argument for table-driven examples, the lack of "HD only" is much
more obvious in the table version.

- George

--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Josue Barbosa dos Santos
2014-08-15 00:51:28 UTC
Permalink
JBehave:

Scenario: subscriber access to HD and SD On Demand movies

Given the user has a <User Subscription>
And a movie offered in <Movie Format>
And the user has set <Format Preference>
When the user purchases the movie
Then they receive the movie in <Format Downloaded>

Examples:
User Subscription | Format Preference | Movie Format | Format Downloaded
PREMIUM           | None              | HD available | User's choice
PREMIUM           | HD                | HD available | HD
PREMIUM           | SD                | HD available | SD
PREMIUM           | Any               | SD only      | SD
STANDARD          | n/a               | HD available | SD
STANDARD          | n/a               | SD           | SD





--
Regards,
Josué
http://twitter.com/josuesantos

George Dinwiddie
2014-08-15 03:49:27 UTC
Permalink
Josue,

On 8/14/14 8:51 PM, Josue Barbosa dos Santos wrote:
> JBehave:
>

What do you prefer about

> Scenario:subscriber access to HD and SD On Demand movies
>
> Given the user has a <User Subscription>
> And a movie offered in <Movie Format>
> And the user has set <Format Preference>
> When the user purchases the movie
> Then they receive the movie in <Format Downloaded>

over

> Given a <User Subscription> user has specified <FormatPreference> preference
> When the user requests a movie available in <Movie Format> format
> Then the <Format Downloaded> format is downloaded.

? The differences I see are that it's wordier, and that it specifies purchasing
(which wasn't apparent to me in the original formulation) over some
other transaction. What am I missing?

- George

P.S. The Gherkin is the same with JBehave and Cucumber.

--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Josue Barbosa dos Santos
2014-08-15 17:08:44 UTC
Permalink
On Fri, Aug 15, 2014 at 12:49 AM, George Dinwiddie <lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org>
wrote:

> Josue,
>
> On 8/14/14 8:51 PM, Josue Barbosa dos Santos wrote:
>
>> JBehave:
>>
>>
> What do you prefer about...



The point was not your construction, but that JBehave is another example
of a tool in which we can mix Gherkin and tabular data.

Sorry, I was not very clear.

--
Regards,
Josué
http://twitter.com/josuesantos


George Dinwiddie
2014-08-15 18:12:58 UTC
Permalink
Josue,

On 8/15/14 1:08 PM, Josue Barbosa dos Santos wrote:
> On Fri, Aug 15, 2014 at 12:49 AM, George Dinwiddie
> <lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org <mailto:lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org>> wrote:
>
> Josue,
>
> On 8/14/14 8:51 PM, Josue Barbosa dos Santos wrote:
>
> JBehave:
>
>
> What do you prefer about...
>
>
>
> The point was not your construction but that JBehave is other example of
> tool that we can mix Gherkin and tabular data.

Oh, there are many of those. The tables are part of the Gherkin syntax.

- George

--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Thiago - BOL
2014-08-18 04:50:38 UTC
Permalink
I think combinatorial testing, such as pair-wise testing (http://www.pairwise.org), can help with choosing the values for testing scenarios like this.



[]s
- Thiago


PS: By the way, this is one thing I plan to address in a future version of FunTester (http://funtester.org).
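As a rough illustration of the pairwise idea (this is not taken from pairwise.org or FunTester; all names, parameter values, and rows below are made up for the sketch), a small checker can verify that a table of examples, like the movie-format one discussed in this thread, exercises every pair of parameter values:

```java
import java.util.Set;
import java.util.TreeSet;

public class PairwiseCoverage {
    // Hypothetical parameters, loosely based on the movie-format table
    // discussed in this thread.
    static final String[][] PARAMS = {
        {"PREMIUM", "STANDARD"},        // subscription
        {"None", "HD", "SD"},           // format preference
        {"HD available", "SD only"},    // movie format
    };

    // Six hand-picked rows; an exhaustive table would need 2 * 3 * 2 = 12.
    static final String[][] ROWS = {
        {"PREMIUM",  "None", "HD available"},
        {"PREMIUM",  "HD",   "SD only"},
        {"PREMIUM",  "SD",   "HD available"},
        {"STANDARD", "None", "SD only"},
        {"STANDARD", "HD",   "HD available"},
        {"STANDARD", "SD",   "SD only"},
    };

    // Returns every value pair (for every pair of parameters) that no row
    // in the table exercises.
    static Set<String> uncoveredPairs(String[][] rows) {
        Set<String> wanted = new TreeSet<>();
        for (int a = 0; a < PARAMS.length; a++)
            for (int b = a + 1; b < PARAMS.length; b++)
                for (String va : PARAMS[a])
                    for (String vb : PARAMS[b])
                        wanted.add(a + "=" + va + " & " + b + "=" + vb);
        for (String[] row : rows)
            for (int a = 0; a < row.length; a++)
                for (int b = a + 1; b < row.length; b++)
                    wanted.remove(a + "=" + row[a] + " & " + b + "=" + row[b]);
        return wanted;
    }

    public static void main(String[] args) {
        System.out.println("uncovered pairs: " + uncoveredPairs(ROWS));
        // prints: uncovered pairs: []
    }
}
```

With these six rows every pair is covered, so the set comes back empty; drop a row and the missing pairs are listed, which is exactly the kind of gap that is hard to spot in long G/W/T scenarios.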





________________________________
From: Josue Barbosa dos Santos <josuesantos-***@public.gmane.org>
To: growing-object-oriented-software-/***@public.gmane.org
Sent: Thursday, August 14, 2014 21:51
Subject: Re: [GOOS] The Life of Acceptance Tests Was: How and Where to start a software develop?



JBehave:

Scenario: subscriber access to HD and SD On Demand movies

Given the user has a <User Subscription>
  And a movie offered in <Movie Format>
  And the user has set <Format Preference>
When the user purchases the movie
Then they receive the movie in <Format Downloaded>

Examples:
User Subscription | Format Preference | Movie Format | Format Downloaded
PREMIUM           | None              | HD available | User's choice
PREMIUM           | HD                | HD available | HD
PREMIUM           | SD                | HD available | SD
PREMIUM           | Any               | SD only      | SD
STANDARD          | n/a               | HD available | SD
STANDARD          | n/a               | SD           | SD




On Thu, Aug 14, 2014 at 5:43 PM, George Dinwiddie <lists-HLNzD44B1AikJOqCEYON2AC/***@public.gmane.org> wrote:

Hi, Nat,
>
>
>On 8/14/14 11:45 AM, Nat Pryce wrote:
>
>I see a lot of G/W/T tests like this (made up) example:
>>
>>Scenario: subscriber access to HD and SD On Demand movies
>>
>>Given the user has a PREMIUM subscription
>>   And a movie offered in both HD and SD
>>   And the user has not set a format preference
>>When the user purchases the movie
>>Then they are offered choice of HD and SD format
>>
>>Given the user has a PREMIUM subscription
>>   And a movie offered in both HD and SD
>>   And the user has set their format preference to SD
>>When the user purchases the movie
>>Then they are not offered choice of format
>>  And they receive the movie in SD format
>>
>>Given the user has a PREMIUM subscription
>>   And a movie offered in both HD and SD
>>   And the user has set their format preference to HD
>>When the user purchases the movie
>>Then they are not offered choice of format
>>  And they receive the movie in HD format
>>
>>Given the user has a PREMIUM subscription
>>   And a movie offered in SD only
>>When the user purchases the movie
>>Then they are not offered choice of format
>>  And they receive the movie in SD format
>>
>>Given the user has a STANDARD subscription
>>   And a movie offered in both HD and SD
>>When the user purchases the movie
>>Then they are not offered choice of format
>>  And they receive the movie in SD format
>>
>>... etc. etc. etc.
>>
>>Lots of duplication, lots of irrelevant detail. It's hard to spot the
>>ambiguous/missing test cases. And this is not as bad as I usually see
>>it.  Imagine that "When the user purchase the movie" is actually: "the
>>user navigates to On Demand movies, And selects the movie, And presses
>>the purchase button." and "they receive the movie in XX format" is
>>actually "they navigate to the downloads screen. And select the movie.
>>And the format icon is XX".
>>
>
Oh, yes, I've seen a lot of bad GWT scenarios. I've seen badly written acceptance tests no matter what the format.
>
>
>
>
>>I prefer to write the rules of what format gets offered when as a
>>relation or function. (I hope the formatting comes out).
>>
>>User Subscription | Format Preference | Movie Format | Format Downloaded
>>------------------+-------------------+--------------+-------------------
>>PREMIUM           | None              | HD available | User's choice
>>PREMIUM           | HD                | HD available | HD
>>PREMIUM           | SD                | HD available | SD
>>PREMIUM           | Any               | SD only      | SD
>>------------------+-------------------+--------------+-------------------
>>STANDARD          | n/a               | HD available | SD
>>STANDARD          | n/a               | SD           | SD
>>
>
To me, this looks like a GWT Scenario Outline with the GWT part being implicit. Personally, I prefer my specs to be a bit more explicit. Perhaps something like:
>
>  Given a <User Subscription> user has specified <Format Preference> preference
>  When the user requests a movie available in <Movie Format> format
>  Then the <Format Downloaded> format is downloaded.
>
>Granted, that "When the user requests a movie" is rather weak in expression, but I'm not having a discussion with the product owner and can't ask clarifying questions. I would like *something* that lets me know *when* something should be true. Certainly movies aren't being downloaded without the user's intervention, are they?
>
>
>
>>That can be more easily targeted at an object in a unit test or used to
>>drive the GUI in a system test, or interact with presenter code that
>>sits behind the UI, or a service.
>>
>
If not expressed as user-interaction steps, then GWT can easily drive any level of the implementation.
>
>
>
>
>>I don't think this has anything to do with London vs Detroit TDD.  I've
>>been corrupted by using Fit!
>>
>
Yes, I thought you meant something else by "functions and constraints." Tests driven by tabular data are excellent for some things, and not so good for others. And I've seen tests driven by tabular data that's completely irrelevant to the point of the test, so that's insufficient to produce good acceptance tests.
>
>BTW, I don't see "HD only" as a "Movie Format" choice. Couldn't that be possible? If so, wouldn't the "HD and SD" description in your earlier scenarios be better than the "HD available" in the table? Strengthening your argument for table-driven examples, the lack of "HD only" is much more obvious in table version.
>
> - George
>
>
>--
> ----------------------------------------------------------------------
>  * George Dinwiddie *                      http://blog.gdinwiddie.com
>  Software Development                    http://www.idiacomputing.com
>  Consultant and Coach                    http://www.agilemaryland.org
> ----------------------------------------------------------------------
>


--
Abraços,
Josué
http://twitter.com/josuesantos
Josue Barbosa dos Santos
2014-08-14 14:06:15 UTC
Permalink
On Thu, Aug 14, 2014 at 4:15 AM, Malte Finsterwalder <
malte-***@public.gmane.org> wrote:
>>If not a lot of people do it, that may be a hint.
>>Why aren't more people using this approach?
I don't know.
>>Is it too much work with too little payoff?
I don't think so.
>>Is it just too complicated?
I don't think so.

I will try to justify my answers (sorry for my English).

Basically, the only thing you have to do to work in this way is to create
an abstraction for the business steps of the test. For example, suppose a
transfer service (pseudo-Java).

// It is just an example. Probably I would write this test in a different way.
public void shouldTransferWithSuccessWhenHaveEnoughFunds() {

    Application application = getApplication();

    application.login("josue", "josuepassword");

    // suppose we already have the accounts, each with a balance (funds?) of 100
    application.transfer(idOriginAccount, idDestinationAccount, 50);

    assertThat(getAccount(idOriginAccount).getBalance()).isEqualTo(50);

    assertThat(getAccount(idDestinationAccount).getBalance()).isEqualTo(150);
}

We use a Java interface to represent Application and create two
implementations of it: one that drives the UI (generally a web interface)
and one that calls the code behind the UI. Ex:

// drives the web interface
class ApplicationWeb implements Application {
    ...
    public void transfer(Integer idOriginAccount, Integer idDestinationAccount, Value value) {
        browser.gotoToPage("transfer page");

        browser.findElement("originAccount").setValue(idOriginAccount);

        browser.findElement("destinationAccount").setValue(idDestinationAccount);

        browser.findElement("transferValue").setValue(value);

        browser.clickElement("transferButton");

        // The code above uses the driver directly. In reality we would abstract
        // the browser more, so the code would be more or less:
        // browser.setIdOriginAccount(idOriginAccount);
        // browser.setIdDestinationAccount(idDestinationAccount);
        // browser.setTransferValue(value);
        // browser.transfer();
    }
    ...
}

// calls the code behind the UI
class ApplicationDirectCall implements Application {
    ...
    public void transfer(Integer idOriginAccount, Integer idDestinationAccount, Value value) {

        TransferService transferService = getTransferService();

        transferService.transfer(idOriginAccount, idDestinationAccount, value);
    }
    ...
}
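Later in the thread, Seb Rose's experiment of switching between an end-to-end run and a lower-level run via an environment variable comes up; the same idea fits these two implementations. A minimal sketch, assuming a system property (`acceptance.mode`) and stub classes that merely stand in for the real interface and implementations above; none of these bodies are from the original code:

```java
// Sketch only: stub versions of the interface and implementations above.
interface Application {
    void transfer(Integer idOriginAccount, Integer idDestinationAccount, int value);
}

class ApplicationWeb implements Application {
    public void transfer(Integer origin, Integer destination, int value) {
        System.out.println("driving the browser...");
    }
}

class ApplicationDirectCall implements Application {
    public void transfer(Integer origin, Integer destination, int value) {
        System.out.println("calling the service directly...");
    }
}

public class ApplicationFactory {
    // Chooses the implementation from a system property (an environment
    // variable would work the same way), defaulting to the fast one.
    static Application getApplication() {
        String mode = System.getProperty("acceptance.mode", "direct");
        return mode.equals("web") ? new ApplicationWeb() : new ApplicationDirectCall();
    }

    public static void main(String[] args) {
        getApplication().transfer(1, 2, 50);
    }
}
```

Running the suite with -Dacceptance.mode=web would send the same tests through the browser; without it, they call the code behind the UI.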


I don't think the above is too much work or too complicated. And here is
why we use both kinds of tests:

Direct Call Tests:

Fast Feedback

The direct call tests execute much faster. In one example here, all
acceptance tests using direct calls execute in 10 min; the web tests
execute in 2 hours. One could ask why we don't try to speed up the web
tests. In my reality, at the moment, there are not enough resources even
for the production environment, let alone for a testing environment. We
have to fight for disk space, so imagine thinking about a strategy of,
for example, parallelizing the web test execution. They promised that this
will be resolved by the end of the year. I am not so sure about that.

But even if we had resources to try other strategies to speed up the web
tests, we could apply the same strategies to the direct call tests. So the
feedback from the direct call tests would still be faster. For example, if
we reduce the time of the web tests from 2 h to 30 min, probably we could
reduce the time of the direct call tests to 2.5 min.

Ensuring a standard

My context: we are a public service enterprise (in Brazil) and we cannot
contract a company directly, nor can we hire an employee directly. The only
legal way to do this is through a public bidding process where firms
compete. The definitive contract is for a period of 5 years, but because of
some bureaucratic problem we have been working with temporary contracts of
6 months for the last two years. So, usually every six months the company
we work with changes. And generally the employees of those companies do not
have good skills. In the contract we list everything that we want and will
charge for: automatic web tests; automatic unit tests; ATDD; TDD; BDD; OO;
Clean Code... . Fines are in the agreement, but some companies don't
believe in them and think it is a dead letter. But it isn't, and they
suffer. End of my context.

Thus, everything we can do to ensure a standard, we do. And direct call
tests force business logic out of the web tier. Once a programmer came and
complained about how he would test the code if it was in the web tier. And
I suggested: "what about extracting this business logic to another class?
By the way, this SQL code also does not belong here." Sad, but it is true
:( . SIDE NOTE: the current company is good (maybe very good) with code
quality practices. The problem is the small team.

More robust

The direct call tests are more robust. If one breaks, the probability of a
real problem is near 100%. Web tests are less robust. They became much
better when we migrated from Selenium to WebDriver, but once in a while
there is still a false positive.

Web tests:

Tests that everything works together

The only test that verifies whether the application as a whole works is the
web test. The web test is the one that gives you the most confidence that
everything that was working is still working. In fact there are many other
advantages to web tests. Coincidentally, I am working right now on a
document explaining why web tests are important. I am doing this because
the contracted company is proposing to work only with the direct call
tests. When I finish it, maybe I will post it here. In some sections of the
document I quote GOOS. And many other references, of course.

And that is it. I don't think this strategy of working with acceptance
tests both through the UI and behind the UI is too much work, or too
complicated, or has too little payoff. Your mileage may vary, of course.

--
Abraços,
Josué
http://twitter.com/josuesantos







> Greetings,
> Malte
>
> On 11.08.2014 15:50, Steve Freeman wrote:
> > The idea has been around for ages although not many people actually do
> it.
> >
> > S.
> >
> > Steve Freeman
> > http://www.higherorderlogic.com
> >
> > Written on a phone, so please allow for typos and short content.
> >
> > On 11 Aug 2014, at 12:18, Josue Barbosa dos Santos <
> josuesantos-***@public.gmane.org> wrote:
> >
> >>>> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an
> experiment to build acceptance tests (in Cucumber) that can be run either
> end-to-end with a browser or using lower-level objects by changing an
> environment variable.
> >>
> >> In my job we do exactly this. And it is not exactly a new idea. I think
> I read about this strategy in Lisa Crispin's book: Testing Extreme
> Programming (2002).
> >>
> >> --
> >> Abraços,
> >> Josué
> >> http://twitter.com/josuesantos
> >>
> >> On Sun, Aug 10, 2014 at 8:38 PM, Daniel Wellman <etldan-***@public.gmane.org>
> wrote:
> >>> This was a thread from a few years ago which turned into a discussion
> of acceptance tests, end-to-end tests, system tests, and where people might
> test what. I thought there were a lot of helpful (to me, at least) stories
> of how people migrate acceptance tests from end-to-end to directly testing
> the domain objects, and migrating end-to-end acceptance tests into a suite
> of system tests.
> >>>
> >>> Now, a few years later, I'm wondering if anyone has any new insights
> or things they've tried that is working (or not) for them?
> >>>
> >>> I saw a video of Seb Rose's talk at Cuke Up about how he's trying an
> experiment to build acceptance tests (in Cucumber) that can be run either
> end-to-end with a browser or using lower-level objects by changing an
> environment variable. Here's a link to the talk:
> https://skillsmatter.com/skillscasts/3957-dont-you-trust-me-a-technical-approach-to-building-trust-and-consensus
> >>>
> >>>
> >>> One thing I've learned is that when builds get long enough, it's now
> easy enough to set up the build to be run in parallel. I've seen services
> that do this, and other implementations done by hand to get the same
> effect. My take is that this can get you some substantial gains quickly in
> terms of cutting total build time, but these end-to-end tests often (but
> not always) tend to flicker the most (as compared to unit tests or some
> integration tests). Thinking more about what these tests are actually
> covering and what level of test is appropriate and rewriting or refactoring
> seems to be working well from what I've tried so far.
> >>>
> >>> Anyone else?
> >>>
> >>> Cheers,
> >>> Dan
> >>>
> >>>
> >>> On Monday, June 25, 2012 5:23:51 PM UTC-4, Ben Biddington wrote:
> >>>>
> >>>>> What would the acceptance tests look like if they aren't end-to-end?
> >>>>
> >>>> I have been considering this also. I often fall into the trap of
> having controller-level abstractions influence design, so I wondered about
> writing the UI adapter last.
> >>>>
> >>>> Perhaps these are eventual-end-to-end tests.
> >>>>
> >>>> <bb />
> >>>>
> >>>> On Sunday, 24 June 2012 05:32:04 UTC+12, Daniel Wellman wrote:
> >>>>>
> >>>>> Steve and J.B., thank you - your insights just gave me an "A-ha!"
> moment about acceptance tests vs. integration tests. Does what I describe
> below align with your philosophies, or have I missed something?
> >>>>>
> >>>>> I kept thinking that the style described in the book was to prefer
> acceptance tests to *always* be end-to-end tests. The book says this (p.
> 10): "Our preferred implementation of the "role" of acceptance testing is
> to write end-to-end tests which, as we just noted, should be as end-to-end
> as possible, our bias often leads us to use these terms interchangeably
> although, in some cases, acceptance tests might not be end-to-end."
> >>>>>
> >>>>> Then I saw Steve say this:
> >>>>>> The trick is not to do too much through the top-level tests,
> otherwise they grow out of control. In your case, probably there would be a
> smaller number of top-level tests that show the main paths through the
> system
> >>>>>
> >>>>> Which made me start thinking that there was likely a point in the
> project's lifespan where the cost of running or writing those acceptance
> tests as end-to-end tests became too high. Perhaps it's the length of the
> tests, perhaps the team has grown confident enough with their progress that
> they know when they need an end-to-end test and when they don't.
> >>>>>
> >>>>> Then I saw J.B. say this, which gives me some confirmation that
> others are also eventually switching the default style of acceptance tests
> from end-to-end to something ... smaller.
> >>>>>
> >>>>>> I slowly move end to end tests into a system test suite, then
> replace them with isolated tests to check each layer more carefully.
> >>>>>> I principally avoid exhaustive end to end tests to show basic
> correctness; I save them for system level problems, like response time or
> scaling.
> >>>>>
> >>>>>
> >>>>> So then I wondered, "What would the acceptance tests look like if
> they aren't end-to-end?" One idea I had was, "Use the ports and adapters
> architecture and swap out some of the production adapters with
> test-specific adapters." For example, replace the production database with
> versions that store their data in-memory.
> >>>>>
> >>>>> I've spent the last eight years primarily working on web
> applications, so when I hear as "end-to-end" as possible, I think of
> exercising the browser using a tool like Selenium / WebDriver. Writing a
> new acceptance test for every new feature in a long-living and expanding
> system using Selenium gives me some pause. Perhaps when the team decides
> that not all new acceptance tests need to be fully end-to-end, there might
> be other ways to handle that - perhaps replace the UI adapter with
> something else, say a headless browser like HtmlUnit or something.
> >>>>>
> >>>>> And as I write this, it also occurs to me that if in fact a large
> webapp grows over several years, then perhaps this is an indicator to break
> up that system into smaller webapps or systems or other units.
> >>>>>
> >>>>>
> >>>>> Am I off my rocker?
> >>>>>
> >>>>> Dan
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Sat, Jun 23, 2012 at 10:19 AM, J. B. Rainsberger <
> jbrain...-***@public.gmane.org> wrote:
> >>>>>> I find success in combining a few end to end tests for basic flow
> and perhaps one key error flow with exhaustive tiny isolated tests.
> >>>>>>
> >>>>>> If I'm not sure where I'm going, then I typically start with end to
> end tests, then when they start to duplicate themselves too much, I dive
> down.
> >>>>>>
> >>>>>> I slowly move end to end tests into a system test suite, then
> replace them with isolated tests to check each layer more carefully.
> >>>>>>
> >>>>>> I principally avoid exhaustive end to end tests to show basic
> correctness; I save them for system level problems, like response time or
> scaling.
> >>>>>>
> >>>>>> J. B. Rainsberger :: jbrains.ca
> >>>>>>
> >>>>>> On Jun 23, 2012 9:50 AM, "Rick Pingry" <rpi...-***@public.gmane.org> wrote:
> >>>>>>> Thanks Steve, this begins to answer some of the questions I have
> been having lately as well. I just watched your "Fractal TDD" video, and
> it touched on this quite a bit
> >>>>>>>
> >>>>>>> On a recent project, I had it quite easy in that I did not have
> any real external dependencies to worry about. It was actually a console
> application. As described by GOOS, for every new feature I was wanting to
> add, I would write an end-to-end test that exercised the feature, and then
> I would drive down to where the classes were implementing things all the
> way down (like tortoises). I was not sure about this because at each level
> I felt like I was kind of duplicating tests. Because there were tests at
> each level, I felt like the tests were too coupled with my design.
> >>>>>>>
> >>>>>>> Then I started thinking that maybe I just needed 2 tests, one on
> the very top, an acceptance test that shows an example of feature, then
> ones right where there were actually changes happening.
> >>>>>>>
> >>>>>>> So, first question... Is this right? It felt better. You are
> saying that you should just write the full end-to-end acceptance test on a
> few of the main paths, like the main "Happy Cases", or the ones that
> provide the most risk, then for the others drive just a little deeper so
> you don't have to take everything through the brittle outter layers, past
> your ports and adapters to what I would call "edge-to-edge" rather than
> "end-to-end".
> >>>>>>>
> >>>>>>> On another project I was working on, this was much more the case.
> I would actually test through a mocked version of my "port" to test
> "edge-to-edge". I wondered at that time how much the interface of my port
> should resemble the external resource I was adapting. From what I have
> seen, you are saying it should not, it should more reflect the needs of
> your domain. So, for my second question...
> >>>>>>>
> >>>>>>> As I worked this way, I would quite often find code that WANTED to
> live in this boundary between my domain classes and my adapters, or in the
> adapter itself. This adapter area became dangerous ground. I had a hard
> time budding off classes in between in a tested way. I actually struggle
> with budding off in-between classes pretty badly in general. All of my
> tests seem to be so tied to the interfaces of their collaborators that to
> bud something in-between makes everything break. It is probably a more
> systemic problem now that I think of it. Anyway, it is a real problem in
> particular for this adapter area because as I learn more about the external
> resource I am using and what I am required to do with it, code collects in
> this adapter layer that seems untestable. Perhaps that is where I need to
> be doing "integration tests" that I have been avoiding?
> >>>>>>>
> >>>>>>> On Friday, June 22, 2012 6:36:26 AM UTC-6, Steve Freeman wrote:
> >>>>>>>>
> >>>>>>>> That sounds reasonable.
> >>>>>>>>
> >>>>>>>> The trick is not to do too much through the top-level tests,
> otherwise they grow out of control. In your case, probably there would be a
> smaller number of top-level tests that show the main paths through the
> system: type in a name, show what happens when a result is or is not found.
> Then I'd probably have more detailed testing lower down to exercise the
> components that did the phonetic look up, and the translation to
> coordinates.
> >>>>>>>>
> >>>>>>>> S
> >>>>>>>>
> >>>>>>>> On 22 Jun 2012, at 12:47, Claudio Santos wrote:
> >>>>>>>>> I work with Geographic Information Systems, and most of all
> system I
> >>>>>>>>> develop have a strong link with user interface.
> >>>>>>>>>
> >>>>>>>>> The doubt is: when i said that my system must show a geographic
> location
> >>>>>>>>> through a phonetic search.
> >>>>>>>>>
> >>>>>>>>> I will start my develop, creating a test that valid if a user
> search, show
> >>>>>>>>> the correctly geolocation on the map.
> >>>>>>>>>
> >>>>>>>>> Through the creation of this test, i will create a web page with
> a test
> >>>>>>>>> box, then the search will fail, then i implement a test for
> phonetic search
> >>>>>>>>> until this not fail anymore, after that i will creating my
> application step
> >>>>>>>>> by step, ever failing some functionality on user interface.
> >>>>>>>>>
> >>>>>>>>> Is this the right concept?
> >>>>>>>>>
> >>>>>>>>> About the rules I have mentioned, i believe that most of then
> are around
> >>>>>>>>> the best pratices to develop Object Oriented SW, ever
> considering the
> >>>>>>>>> cohesion and coupling.
> >>>>>>>>>
> >>>>>>>>> Very tks for the answer...
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Friday, June 22, 2012 5:32:23 AM UTC-3, Steve Freeman wrote:
> >>>>>>>>>>
> >>>>>>>>>>> I'm reading the book, and the main doubt is, how to start a
> software
> >>>>>>>>>> development?
> >>>>>>>>>>>
> >>>>>>>>>>> I would to know if I must start to write a test using selennium
> >>>>>>>>>> framework, because the systems that I developed ever satisfy an
> users that
> >>>>>>>>>> need to put some information through some text boxs.
> >>>>>>>>>>
> >>>>>>>>>> that might be a good start. What would you like to show that
> the system
> >>>>>>>>>> can do?
> >>>>>>>>>>
> >>>>>>>>>>> Is this a main rule that all of us must to follow?
> >>>>>>>>>>
> >>>>>>>>>> there are few rules that everyone must follow.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Steve Freeman
> >>>>>>>>
> >>>>>>>> Winner of the Agile Alliance Gordon Pask award 2006
> >>>>>>>> Book: http://www.growing-object-oriented-software.com
> >>>>>>>>
> >>>>>>>> +44 797 179 4105
> >>>>>>>> Twitter: @sf105
> >>>>>>>> Higher Order Logic Limited
> >>>>>>>> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
> >>>>>>>> Company registered in England & Wales. Number 7522677
> >>>
> >>
> >>
> >>
> >
>
>

Steve Freeman
2012-06-23 18:33:01 UTC
Permalink
On 23 Jun 2012, at 14:50, Rick Pingry wrote:
> On a recent project, I had it quite easy in that I did not have any real
> external dependencies to worry about. It was actually a console
> application. As described by GOOS, for every new feature I was wanting to
> add, I would write an end-to-end test that exercised the feature, and then
> I would drive down to where the classes were implementing things all the
> way down (like tortoises). I was not sure about this because at each level
> I felt like I was kind of duplicating tests. Because there were tests at
> each level, I felt like the tests were too coupled with my design.

there's a bit of a balance with multiple level tests. I often find that I move detailed testing down as I extract supporting objects. I try to make each test reflect the language of the object it's testing.

> Then I started thinking that maybe I just needed 2 tests, one on the very
> top, an acceptance test that shows an example of feature, then ones right
> where there were actually changes happening.
> So, first question... Is this right? It felt better. You are saying that
> you should just write the full end-to-end acceptance test on a few of the
> main paths, like the main "Happy Cases", or the ones that provide the most
> risk, then for the others drive just a little deeper so you don't have to
> take everything through the brittle outter layers, past your ports and
> adapters to what I would call "edge-to-edge" rather than "end-to-end".

I do that quite often, provided I have enough confidence I'm not missing other effects.

Remember that different tests serve different audiences: outward facing tests are about communicating to others, inward facing tests are for the dev team.


> On another project I was working on, this was much more the case. I would
> actually test through a mocked version of my "port" to test "edge-to-edge".
> I wondered at that time how much the interface of my port should resemble
> the external resource I was adapting. From what I have seen, you are
> saying it should not, it should more reflect the needs of your domain. So,
> for my second question...
>
> As I worked this way, I would quite often find code that WANTED to live in
> this boundary between my domain classes and my adapters, or in the adapter
> itself. This adapter area became dangerous ground. I had a hard time
> budding off classes in between in a tested way. I actually struggle with
> budding off in-between classes pretty badly in general. All of my tests
> seem to be so tied to the interfaces of their collaborators that to bud
> something in-between makes everything break. It is probably a more
> systemic problem now that I think of it. Anyway, it is a real problem in
> particular for this adapter area because as I learn more about the external
> resource I am using and what I am required to do with it, code collects in
> this adapter layer that seems untestable. Perhaps that is where I need to
> be doing "integration tests" that I have been avoiding?


At a guess, it sounds like it. I often find that I test adapters with integration tests because that is what's most relevant at that level. That has advantages because you can really stretch your adapter code without carrying the rest of the system, and get more immediate feedback.

S.
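To make the last point concrete, here is a sketch of the kind of adapter-level integration test described above. The adapter and its names are made up for the example (nothing here comes from the thread), and it assumes Java 11+ for Files.writeString/readString: the test drives a small file-system adapter against a real temporary directory, without carrying the rest of the system.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical adapter: persists account balances on the file system,
// one small text file per account.
class FileBalanceStore {
    private final Path dir;

    FileBalanceStore(Path dir) {
        this.dir = dir;
    }

    void save(String accountId, long balance) throws IOException {
        Files.writeString(dir.resolve(accountId + ".txt"), Long.toString(balance));
    }

    long load(String accountId) throws IOException {
        return Long.parseLong(Files.readString(dir.resolve(accountId + ".txt")).trim());
    }
}

// Integration-style test: exercises the adapter against a real temporary
// directory, with no UI and no domain objects involved.
public class FileBalanceStoreIntegrationTest {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("balances");
        FileBalanceStore store = new FileBalanceStore(tmp);
        store.save("acc-1", 150);
        System.out.println(store.load("acc-1")); // prints 150
    }
}
```

Because the test touches only the adapter and the real resource behind it, it gives the immediate feedback mentioned above, and failures point straight at the boundary code rather than at the whole system.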