From 933e6f8317d87b97c016ead87ddb44cf4e3790ba Mon Sep 17 00:00:00 2001
From: "hannes.kuchelmeister"
Date: Thu, 23 Apr 2020 12:27:41 +0200
Subject: [PATCH] remove part in scoring function description

---
 30_Thesis/sections/40_concept.tex | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/30_Thesis/sections/40_concept.tex b/30_Thesis/sections/40_concept.tex
index 92f0fd8..ac0eb87 100644
--- a/30_Thesis/sections/40_concept.tex
+++ b/30_Thesis/sections/40_concept.tex
@@ -229,11 +229,6 @@ where $aggr$ the aggregation function and $score_{user}(P_i, s)$ the configurati
 The example in \autoref{fig:Concept:ForestExample} contains two users. The first user has preferences for the characteristic \emph{manual} of the feature \emph{effort} with $0.8$ and the characteristic \emph{harvester} of the same feature with $0.3$. All other characteristics have a preference of $0.5$. The second user's preferences are $0.5$ for all characteristics. The finished configuration that is to be rated in this example contains the characteristic \emph{low} for each feature except for \emph{effort} and \emph{quantity}, which are set to \emph{manual} and \emph{high}. The score for the finished configuration $S_F$ of user one is $0.54$. This score is the average over all seven feature scores. User one rates all characteristics of all features with $0.5$ except two characteristics of \emph{effort}. Therefore, all feature scores for this user are $0.5$ except the score for \emph{effort}, which is $0.8$ because of the user's preference of $0.8$ for the characteristic \emph{manual}. The resulting average score per feature of $0.54$ is the user's score for this configuration. User two rates all characteristics with $0.5$; therefore, the resulting average is $0.5$. The group configuration score depends on the aggregation strategy used. Multiplication results in a score of $0.54 \cdot 0.5 = 0.27$. The score for average is $\frac{1}{2}(0.54 + 0.5) = 0.52$ and for least misery $\min \{0.54, 0.5\} = 0.5$.
-The second, simpler scoring function approach is to use the preference for each characteristic that is part of the configuration and then take the average. This approach is more transparent because the preference of a user is directly translated into the score and no weighting is done. It means that a configuration score is simpler to understand and to calculate. However, if needed, for example to give one group member more power, it allows relative weighting, too. This can be done by preprocessing the preferences. Moreover, an approach like this ensures that feature weights can be added through preprocessing \todo[]{I can do that with the other function as well, e.g. after computing the score for a single characteristic}. It is therefore possible for a user to assign different importance to different features. Also, other means of weighting ratings are possible. For example, the ratings of one group member who has more knowledge in an area can be increased by multiplying them with a factor, or alternatively the preferences of all other users can be decreased.
-The example above would not result in different feature scores for $P_1$ and $P_2$. Both would result in a score of $0.9$. Therefore, there is a more direct link between a user's preference and the score.
-
-The simplicity of the second approach, in combination with its transparency, is why it is the approach that will be used in further chapters of this thesis \todo[]{why did you explain the first function then?}, especially as trust in a recommendation system is important.
-
 \subsubsection{Configuration Change Penalty} \label{subsubsec:Concept:SolutionGeneration:ScoringFunction:Penalty}
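
The scoring described in the removed paragraphs and the surrounding context can be made concrete with a small sketch. The snippet below is illustrative only and not part of the patch or the thesis code; the names score_user, score_group, and the dictionary layout for preferences and configurations are assumptions. It models the simpler second approach: a user's score for a finished configuration is the average of that user's preferences for the selected characteristics, and the group score is obtained with one of the aggregation strategies mentioned in the text.

import math
from statistics import mean

def score_user(preferences, configuration):
    # preferences: {feature: {characteristic: preference in [0, 1]}}
    # configuration: {feature: selected characteristic}
    # A user's configuration score is the average preference over the
    # characteristics selected in the finished configuration.
    return mean(preferences[feature][characteristic]
                for feature, characteristic in configuration.items())

def score_group(group_preferences, configuration, aggr):
    # Aggregate the per-user configuration scores with a strategy `aggr`.
    return aggr(score_user(p, configuration) for p in group_preferences)

# Aggregation strategies mentioned in the text
multiplication = math.prod   # 0.54 * 0.5 = 0.27 in the forest example
average = mean               # (0.54 + 0.5) / 2 = 0.52
least_misery = min           # min(0.54, 0.5) = 0.5

With user one's preference of 0.8 for manual effort and 0.5 everywhere else, and user two's uniform 0.5, score_user yields roughly 0.54 and 0.5 for the seven-feature example, so the three strategies reproduce the values 0.27, 0.52, and 0.5 given in the text.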