diff options
Diffstat (limited to 'Testing_Reference/en-US/Introduction.xml')
-rw-r--r-- | Testing_Reference/en-US/Introduction.xml | 84 |
1 files changed, 81 insertions, 3 deletions
diff --git a/Testing_Reference/en-US/Introduction.xml b/Testing_Reference/en-US/Introduction.xml
index c6b1bdd..20456ab 100644
--- a/Testing_Reference/en-US/Introduction.xml
+++ b/Testing_Reference/en-US/Introduction.xml
@@ -3,9 +3,87 @@
 <!ENTITY % BOOK_ENTITIES SYSTEM "Testing_Reference.ent">
 %BOOK_ENTITIES;
 ]>
-<chapter>
-	<title>Overview of Tests</title>
+<chapter id="chap-Testing_Reference-Introduction">
+	<title>Introduction</title>
 	<para>
-		para
+		Bugzilla's Testopia extension, together with the Kolab Python utility suite, leads the way in how Kolab executes its testing. Many components require testing, and many deployment scenarios (<emphasis>environments</emphasis>) must be replicated for test results to be representative.
 	</para>
+	<para>
+		Collecting those test results, and reporting based on them, is performed through Bugzilla's Testopia extension. Reporting is important, because the early identification of potentially problematic areas or sore spots enables us to shift attention and resources to those areas before any such problems reach our consumers.
+	</para>
+	<section id="sect-Testing_Reference-Introduction-Bugzillas_Testopia">
+		<title>Bugzilla's Testopia</title>
+		<para>
+			In Testopia, a <emphasis>test plan</emphasis> is attached to a Bugzilla product. Each test plan contains a number of <emphasis>test cases</emphasis>, each case representing a single action to be taken, of which the results can be measured.
+		</para>
+		<para>
+			These plans enable concise testing of a single product component, such as <application>Kontact</application>, but of course the outcome of the tests does not depend solely on that single product component: Kolab is a complete environment of interconnected, smaller components. As such, the test cases in each plan are run against a <emphasis>build</emphasis> of the software being tested, said software being deployed into an <emphasis>environment</emphasis>.
+		</para>
+		<para>
+			These builds and environments, as well as the runs of test cases against them, can be organized and tracked in Testopia.
+		</para>
+		<para>
+			Bugs may be created from failed tests, and test cases may be created from bugs.
+		</para>
+		<para>
+			Actually executing each test case for each build against different environments, however, is a different story; manually preparing the environment for a test and manually executing the test cases is inefficient, tedious, error-prone and not reproducible, and therefore painstaking, expensive and not even proper testing.
+		</para>
+	</section>
+
+	<section id="sect-Testing_Reference-Introduction-Kolabs_PyKolab">
+		<title>Kolab's PyKolab</title>
+		<para>
+			Amongst other things, we try to automate as much testing as possible with the Kolab Python utilities. Test plans are attached to what we call a <emphasis>suite</emphasis> (a single suite can contain multiple relevant products), each suite containing a number of sets (plans) of tests (cases).
+		</para>
+		<para>
+			We speak of suites rather than products because a single suite may contain tests targeted at more than one product component, although most suites do in fact target only one product.
+		</para>
+		<para>
+			Each "plan", or series of tests, roughly corresponds with one iteration of resetting the environment. If, say, user a@b.com merely wants an extra folder, the complete environment does not need to be reset; should user a@b.com no longer be supposed to exist, however, it does. Additionally, "plans" form a means of categorization.
+		</para>
+		<para>
+			Each "case", or test, represents a single step on our way to completing the report on a Testopia test case, or a fully automated series of steps completing a Testopia test case. The latter obviously assumes the former has been performed before, or is trusted to work.
+		</para>
+	</section>
+
+	<section id="sect-Testing_Reference-Introduction-Obtaining_the_Source_Code_for_PyKolab">
+		<title>Obtaining the Source Code for PyKolab</title>
+		<para>
+			To obtain a copy of PyKolab's source code, please execute the following:
+		</para>
+		<para>
+
+<screen>$ <userinput>git clone git://git.kolab.org/git/pykolab</userinput></screen>
+
+		</para>
+		<note>
+			<title>Software Requirements</title>
+			<para>
+				You will need <application>autoconf</application> and <application>automake</application> installed in order to run PyKolab directly from source. No installation packages have been provided yet.
+			</para>
+		</note>
+		<para>
+			Now, navigate into the source code repository root directory, and execute the following commands:
+		</para>
+		<para>
+
+<screen>$ <userinput>autoreconf -v && ./configure</userinput></screen>
+
+		</para>
+	</section>
+
+	<section id="sect-Testing_Reference-Introduction-Configuration">
+		<title>Configuration</title>
+		<para>
+			Configuration is performed through files similar to <filename>conf/kolab.conf</filename>, all in .ini format. We recommend you look at <ulink url="http://git.kolab.org/pykolab/tree/conf/kolab-test-example.conf"><filename>conf/kolab-test-example.conf</filename></ulink>, copy it, modify it, and knock yourself out against your own test environment.
+		</para>
+	</section>
+
 </chapter>
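Purely as an illustration of the .ini format the Configuration section describes, a configuration file of this kind could look like the sketch below. Every section and key name here is invented for the example; the authoritative names are those in conf/kolab-test-example.conf in the PyKolab repository.

```ini
; Hypothetical sketch only: these section and key names are invented
; for illustration. Consult conf/kolab-test-example.conf for the real
; configuration keys used by PyKolab.
[kolab]
primary_domain = example.org

[test]
; which deployment scenario (environment) this test run targets
; (hypothetical key)
environment = local-imap
```

As recommended above, the practical workflow is to copy the example file shipped with the source and adjust its values to match your own test environment, rather than writing a file from scratch.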