\chapter{Evaluation}
\label{ch:Evaluation}

In this chapter the prototype is evaluated in terms of its functionality and its properties. All possible valid configurations will be generated for one use case, i.e. the forest use case. In addition, groups with explicit preferences and a configuration state (for example the currently existing forest) are generated.

\section{Group Types During Evaluation}
\label{sec:Evaluation:GroupTypes}
\begin{itemize}
	\item Groups with random preferences
	\item Groups with grouped preferences: members adhere more or less to one profile (Forest Owner, Athlete, Consumer, Environmentalist)
	\item Groups of only one profile type: rather homogeneous groups
\end{itemize}

\section{Metric}
\label{sec:Evaluation:Metrics}
The evaluation requires a metric to evaluate against. The proposed metric is satisfaction, quantified by a threshold. A user's preferences are used to calculate a rating for each possible solution; the score of a solution is the average of the user's ratings for the characteristics contained in that solution. As a result, every configuration can be compared to all other configurations and ranked by the percentage of configurations that it beats. With 50\% as the baseline, the metric accepts one parameter, called the satisfaction mean distance $smd$. A user counts as satisfied with a solution if the solution scores better than $50\% + smd$ of all possible solutions; conversely, a user counts as unsatisfied if the solution ranks among the lowest scoring $50\% - smd$. Formally, for user $u$ and configuration $c$, let $\operatorname{rank}_u(c)$ denote the fraction of all valid configurations whose score for $u$ is lower than that of $c$. Then $u$ is satisfied with $c$ if $\operatorname{rank}_u(c) \geq 0.5 + smd$ and unsatisfied if $\operatorname{rank}_u(c) \leq 0.5 - smd$.

\section{Questions to Answer During the Evaluation}
\label{sec:Evaluation:Questions}
\begin{itemize}
	\item Main question: How does the satisfaction with a group decision differ from the decision of a single decision maker?
	\item How many group members are satisfied by the group decision on average?
	%\item Is the recommender fair, i.e. no user type is always worse off than others? (Just uses group preferences)
	\item How does the amount of stored finished configurations relate to recommendation satisfaction?
\end{itemize}

\section{Effect of Stored Finished Configurations}
\label{sec:Evaluation:EffectFinishedConfiguration}
When evaluating only a subset of the stored finished configurations, it is important to avoid outliers. For this reason a process inspired by cross-validation is used. The configuration database is randomly shuffled and sliced into sub-databases of the required size. For example, if the evaluated stored data size is 20, a configuration database containing 100 configurations is split into five sub-databases of size 20. The evaluation is then run on each sub-database and the results are averaged.

\section{Generating Data}
\label{sec:Evaluation:GeneratingGroups}
The whole process explained in this section is visualized in \autoref{fig:Evaluation:GeneratingDataProcess}.

\subsection{Generating Unfinished Configurations}
Unfinished configurations are generated by taking a finished configuration and keeping only a subset of its characteristics. This way all generated configurations are valid and lead to valid solutions. For the results presented in this chapter, around $\frac{1}{7} \approx 15\%$ of the characteristics are kept.
\todo[inline]{why this parameter, elaborate on that}
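To make this step concrete, the following is a minimal Python sketch. It assumes that a configuration is a plain mapping from attributes to characteristics and that the keep ratio is a free parameter; the function name and data layout are illustrative and not taken from the prototype.

\begin{verbatim}
import random

def generate_unfinished(finished_config, keep_ratio=1/7, rng=random):
    """Derive an unfinished configuration by keeping a random subset
    of the characteristics of a finished (valid) configuration.

    `finished_config` is assumed to be a mapping from attribute name
    to the chosen characteristic; the layout is illustrative.
    """
    characteristics = list(finished_config.items())
    # Keep at least one characteristic so the partial configuration
    # still constrains the remaining choices.
    keep_count = max(1, round(len(characteristics) * keep_ratio))
    kept = rng.sample(characteristics, keep_count)
    return dict(kept)

# Example: every finished configuration in the database yields one
# unfinished configuration that is guaranteed to be completable.
# unfinished = [generate_unfinished(c) for c in configuration_database]
\end{verbatim}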
\subsection{Generating Preferences}
For the forest use case, the idea is that there are multiple types of user profiles. Each user profile is represented by a neutral, negative or positive attitude towards an attribute value. During data generation the attitude is converted into a preference using a normal distribution. \autoref{fig:Evaluation:DataGeneration} shows how a user profile can be converted into preferences; a minimal code sketch of this conversion is given at the end of this chapter.

\pgfplotsset{height=5cm,width=\textwidth,compat=1.8}
\pgfmathdeclarefunction{gauss}{2}{%
	\pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}%
}
\begin{figure}
	\begin{tikzpicture}
		\begin{axis}[
			every axis plot post/.append style={
				mark=none, domain=0:1, samples=50, smooth
			},
			axis x line*=bottom,
			xmin=0, xmax=1, ymin=0.1,
			xticklabel style={
				/pgf/number format/precision=3,
			},
			xtick={0, 0.25, 0.5, 0.75, 1},
			hide y axis]
			\addplot [draw=red][very thick] {gauss(0.25,0.1)} node[text=red][above,pos=0.5] {negative};
			\addplot [draw=blue][very thick] {gauss(0.5,0.05)} node[text=blue][above,pos=0.48] {neutral};
			\addplot [draw=green!60!black][very thick] {gauss(0.75,0.1)} node[text=green!60!black][above,pos=0.5] {positive};
		\end{axis}
	\end{tikzpicture}
	\caption{Distribution of preferences for a user type.}
	\label{fig:Evaluation:DataGeneration}
\end{figure}

These user profiles can be used to generate rather homogeneous groups, but also groups with more conflicting interests. For completely random groups, preferences are drawn from a uniform distribution instead, which yields more chaotic groups. The whole process is shown in \autoref{fig:Evaluation:GeneratingDataProcess}.

\begin{figure}
	\centering
	\includegraphics[width=1\textwidth]{./figures/60_evaluation/bpmn_evaluation_input_data_generation.pdf}
	\caption{The process used for generating data for the evaluation.}
	\label{fig:Evaluation:GeneratingDataProcess}
\end{figure}

\section{Results}
\label{sec:Evaluation:Results}
\todo[inline]{explaining evaluations}
\missingfigure{Result figure}
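As referenced in \autoref{sec:Evaluation:GeneratingGroups}, the following minimal Python sketch illustrates how a profile attitude could be converted into a preference value, using the means and standard deviations from \autoref{fig:Evaluation:DataGeneration}. The clipping to $[0, 1]$ and all names are assumptions for illustration and are not taken from the prototype.

\begin{verbatim}
import random

# Attitude of a user profile towards an attribute value, mapped to the
# mean and standard deviation shown in the preference distribution figure.
ATTITUDE_DISTRIBUTIONS = {
    "negative": (0.25, 0.10),
    "neutral":  (0.50, 0.05),
    "positive": (0.75, 0.10),
}

def attitude_to_preference(attitude, rng=random):
    """Draw a preference in [0, 1] for one attribute value.

    For the three profile attitudes a normal distribution is sampled
    and clipped to [0, 1]; for completely random groups a uniform
    distribution is used instead.
    """
    if attitude == "random":
        return rng.uniform(0.0, 1.0)
    mean, std = ATTITUDE_DISTRIBUTIONS[attitude]
    return min(1.0, max(0.0, rng.gauss(mean, std)))

# Example: the preferences of one profile-based group member could be
# generated by applying this function to every attribute value of the
# chosen profile.
\end{verbatim}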