Fairness or anger in Ultimatum Game rejections?


Simon Knight1

1 University of Leeds, United Kingdom sjgknight@gmail.com

Abstract

Guth, Schmittberger and Schwarze's (1982) ultimatum game result is replicated with mean earnings of £59.98 (N = 51, SD = £11.45) from a possible £80, and a linear relationship between offer size and acceptance rate. Results indicate a significant interaction effect of offer size and response on response time, F(3, 31) = 3.69, p < .05. Our novel adjustment introduced each proposer's most "common offer" to responders. Results were in accord with prior work (Knez & Camerer, 1995): social comparisons between the participant and a hypothesised responder (the receiver of the 'common offer') were made only at mid-range offers (£2), which were accepted more often from proposers with low common offers than from those with high common offers, t(45) = 3.28, p < .05.

Keywords: Ultimatum game, neuroeconomics, behavioural economics, social comparison, decision making

Introduction

The ultimatum game (Guth, Schmittberger, & Schwarze, 1982) is a one-shot, two-player economic game. The standard experimental set-up involves one participant being assigned the role of 'proposer' and the second that of 'responder'. The proposer is told to divide a set amount of money, typically $10, with the responder. The responder may either accept or reject the offer: if they accept, then the money is split according to the offer; if they reject, then neither participant receives any money.

Game theory predicts that any non-trivial offer will be accepted by responders, and that proposers will make very small offers. However, experimenters consistently find that offers of under approximately 20% are rejected about 50% of the time, and that proposers tend to make offers of 40-45%. Responders seem to show a consistent willingness to forfeit potential gains in order to make spiteful rejections, rejecting low offers so as to punish unfair proposers (Levine, 1998).

This result has been replicated across cultures with some success, particularly in student populations (Henrich et al., 2005). Varying the monetary stakes (e.g. from $10 to $100) does not produce any substantial shift toward the game-theoretic predictions outlined above (Slonim & Roth, 1998; Cameron, 1999; Fu, Kong, & Yang, 2007; Hoffman, McCabe, & Smith, 1996; List & Cherry, 2000).

Evidence suggests these rejections stem from emotional responses, with acceptance stemming from goal maintenance (Sanfey, Rilling, Aronson, Nystrom, & Cohen, 2003). The implication is that poor offers are emotion-inducing; they are painful in a sense (O'Connor, De Dreu, Schroth, Barry, Lituchy, & Bazerman, 2002; Harlé & Sanfey, 2007; van't Wout, Kahn, Sanfey, & Aleman, 2006), and this effect is robust (Bosman, Sonnemans, & Zeelenberg, 2001). This emotion seems to stem from anger rather than from a sense of injustice (Pillutla & Murnighan, 1996).

However, within this explanatory paradigm there is scope for participants to angrily reject unfair offers in line with a fairness hypothesis, or to angrily punish unfair offers under an envy hypothesis. A fairness hypothesis suggests that unfair offers are rejected because we wish to punish a proposer who would make an offer that violates the social norm of fairness: the concern is with unfair offers in general. In contrast, the envy hypothesis, which has some support (Kirchsteiger, 1994; Abbink, Bolton, Sadrieh, & Tang, 2001), simply suggests that we punish those who would make us, in particular, an unfair offer.

Evidence suggests that responder behaviour is mediated by social norms (Henrich, 2000; Burns, 2006), and can thus be altered, or 'learned', either by creating an artificially low 'average offer' of which participants are aware, or via more general cross-cultural differences. In these circumstances the subjective 'unfair offer' is altered, such that a desire to punish is not induced and neither a fairness nor an envy hypothesis can be supported.

In one novel modified ultimatum game (Bohnet & Zeckhauser, 2004) the authors conclude that "social comparisons activate the norm of equity: responders expect to be treated like others in like circumstances" (p. 505). Their experiment explored the effect that knowledge of the average offer made had on participant responses. They found that participants expected to receive offers similar to those made to 'everyone else', i.e. the average offer.

They suggest that this social comparison is indicative of a norm hypothesis: responders care about their payoffs and about whether they are receiving their just deserts. In comparison, the relative standing hypothesis states that responders care about the absolute amount of money they receive and their standing relative to other responders, and should thus accept all non-trivial offers. Note again, though, that the 'norm hypothesis' in this interpretation is interpretable under both envy and fairness, while the 'absolute standing' hypothesis is already discredited by the classic ultimatum game finding that non-trivial offers are rejected.

Another study that analysed this interaction (Knez & Camerer, 1995) used a 3-player ultimatum game in which proposers made two offers simultaneously to two different responders, both of whom were aware of both offer sizes. They found that for about half of respondents there was a social comparison effect, and that generally these responders rejected offers more frequently if they were offered less than the other respondent.

However, in this study participants were not anonymous to each other. Furthermore, the strategy method was employed, whereby participants state a minimally acceptable offer prior to the offer being made; and, most significantly, participants had 'outside options': even if they rejected offers, they still received some money. Others have found this to increase acceptance rates (Handgraaf, Dijk, Wilke, & Vermunt, 2003).

Indeed, some participants stated they would accept offers of the same size as their outside option. Classically they should reject such offers: being economically indifferent, as both amounts were the same size, they should under normal conditions wish to punish the proposer by rejecting. Related to this concern is the issue that the responders already have a focal point for comparison in this study, as they have different 'outside options'; this already creates a preliminary comparison between the responders. Furthermore, the use of actual proposers severely limits the ability to explore the range of responses to various offer kinds.

The most obvious social comparison is that between the proposer and the responder. Utilising this social comparison in an explanation, we would expect a responder who was sufficiently displeased with an offer to reject it so that the large disparity in earnings (between what they were offered and what the proposer would earn) becomes a smaller disparity, even if this comes at the cost of losing their own earnings. Supporting this idea is research showing that participants are more likely to accept low offers when they do not know how much the proposer will earn, removing the opportunity for this social comparison (Croson, 1996; Straub & Murnighan, 1995).

In the limited social comparison research thus far, two comparisons have been compounded: those between the responder and proposer, and those between one responder and another. The introduction of 'focal points', such that responders are aware of the average (modal) offer of each individual proposer while receiving varying actual offers from them, should further reinforce the norm hypothesis: that responders punish those who make them individually inequitable offers. Importantly, it also allows for the analysis of responses under various conditions.

If the offer is held constant, then the proposer-responder comparison should be consistent, while the responder-responder comparison changes. Thus, data may be considered in terms of responses to offers made, and within that scope, in terms of the stated 'common offer' of each proposer. In this manner, the effect of different kinds of social comparison can be analysed.

This study seeks to analyse a range of these results using a modified ultimatum game paradigm to examine social comparison and reaction time. We expect to replicate the standard ultimatum game, with high rejection rates for low offers. We anticipate that acceptance of offers at different levels will vary significantly as a function of the proposer's purported 'common offer', which will be stated before their current offer is made. Differences will reflect either an envy or a fairness hypothesis. An envy model predicts that participants envy high common offers when they do not receive them, and thus reject low offers made by proposers with high common offers. In contrast, a fairness model predicts that people reward fair prior offers, and are thus more likely to accept low offers from proposers with high common offers.¹

We can expect to see three kinds of results - and these may all occur, as Knez and Camerer (1995) found:

1) Social comparison such that people only compare their earnings with the proposer's, giving the standard result, with no influence of 'common offer'.
2) Social comparison such that people compare their earnings with those under the common (modal) offer, experience envy, and punish the proposer, in keeping with an envy hypothesis. Support for this comparison will come from a higher rejection rate within offer sizes for high common offers than for low common offers.
3) Fairness: people reward modal high offers, even when they receive a low offer. This result requires a higher rejection rate within offer sizes for low common offers than for high common offers. This result is interpretable as social comparison if one argues that responders perceive that proposers who have a high modal offer normally earn less, so the earnings discrepancy between them and the responder is smaller overall, even if it is larger for the particular offer.

In line with cognitive research, specifically the fMRI data, reaction time information will also be recorded. Previous studies recording reaction times in the ultimatum game (van't Wout, Kahn, Sanfey, & Aleman, 2005; Brañas-Garza, León-Mejía, & Miller, 2007; Rubinstein, 2007) have not considered the decision-making time to accept or reject an offer. However, given that the insula is implicated in rejection of low offers (Sanfey et al., 2003), we would expect these emotionally based responses to be faster. Correspondingly, accepting unfair offers may require mediation of the emotional responses of the insula by the higher-level dorsolateral prefrontal cortex, resulting in slower reaction times for these responses (Goldin, McRae, Ramel, & Gross, 2008). One study that constrained the time frame in which participants were allowed to decide their response, which we might consider as reducing their cognition time, found higher rejection rates, although learning effects removed this result (Sutter, Kocher, & Strauss, 2003). Consistent with this, we would expect to find low offers rejected faster than they are accepted.

Method

Participants

Participants were 51 undergraduate students (12 male, 39 female) recruited via advertising. Participants were informed that a proportion of them would receive a payment for their participation, which was later explained in more detail (as below).

Of these, reaction time data were collected for 46 participants.²

Materials

Materials were presented on 17'' CRT monitors. Ultimatum game materials were presented using E-Prime v.1.2 (Psychology Software Tools, Inc.). During the practice block, cartoon faces (as in Figure 1a) were presented alongside a randomised selection of the top 8 male and top 8 female names in England and Wales for the year 1984 (Merry, 1995), in the form "This is [Name]", to the left of the cartoon image. This was displayed until participants pressed a key to move on to the offer, each offer (£1-£4) being presented twice over the practice procedure. A final screen displayed the total amount earned by the participant, automatically computed by the E-Prime program.

[Figure 1. (a) Cartoon face used in the practice block; (b) standardised photograph used in the experimental trials.]

Standardised black-and-white photographs (approximately 427 x 470 pixels) were created using GIMP v.2, as described in Appendix 1, for use in the experimental trials, as shown in Figure 1b.

These were displayed to the right-hand side of the information, which comprised a name and the proposer's most common offer. The female names were the 9th-24th most popular names of 1984; the male names were the 9th-26th, excluding the 20th (phonetically identical to the 17th) and the 21st (the experimenter's name).

The common offers were presented as an integer in the range £1-£4, shown both as an offer size and as the earnings for both responder and proposer if the responder accepted an offer of that size. Participants were also reminded that if they rejected an offer, neither they nor the proposer would receive any money for that turn.

Accept, Reject, and Rest keys of 32.5 mm x 32.5 mm were made to affix to a standard keyboard, as shown in Figure 2. These were used as response keys, with participants pressing the appropriate key to respond to offers and to move through the onscreen procedure. Where participants were asked specifically to press the 'Rest' key, no other key would allow them to proceed.

[Figure 2. Accept, Rest, and Reject keys affixed to a standard keyboard.]

A Casio fx-82SX fraction calculator with a random number generator was used to decide which participants to pay, and to compute the number of credits earned. Participants were paid when the random number generated fell below 0.25, giving the stated 25% chance of payment.

Design

A two-factor repeated measures design was employed; the factors of offer and common offer each had 4 levels (£1-£4), and each possible combination was presented twice, giving a total of 32 trials. Both reaction time and response data were collected.
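To make the trial structure concrete, the following sketch generates the 4 x 4 x 2 = 32 randomised trials described above. This is an illustration only, not the original E-Prime implementation; the function and variable names are hypothetical, and the £8 pot is taken from the stimulus description in the Procedures.

```python
import random
from itertools import product

POT = 8  # total to be divided on each trial (GBP), per the stimulus text


def build_trials(seed=None):
    """Build the 32-trial list: 4 offers x 4 common offers, each pairing twice."""
    rng = random.Random(seed)
    trials = [
        {"offer": offer,                 # amount offered to the responder (£1-£4)
         "common_offer": common,         # proposer's stated most common offer (£1-£4)
         "proposer_keeps": POT - offer}  # proposer's earnings if the offer is accepted
        for offer, common in product(range(1, 5), range(1, 5))
        for _ in range(2)                # each combination presented twice
    ]
    rng.shuffle(trials)                  # randomise presentation order
    return trials


trials = build_trials(seed=1)
assert len(trials) == 32
```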

Procedures

At recruitment participants were told that the 30-60 minute experiment was investigating the relationship between personality and behaviour in a game, and that 25% of participants would receive some money for taking part. All participants were given the same information and all were required to sign up to a timeslot prior to taking part, an important control (Sosis, 2005).

On attendance, participants were seated in separate rooms. Participants were asked to read an information sheet describing the ultimatum game, and informing them that the experiment conformed to ethical guidelines from the British Psychological Society (BPS). The sheet explained their position as a responder, whose task it was to respond to offers made by proposers by either accepting or rejecting them. They were told that 25% of participants would earn 10% of their game earnings, and that those recruited from the Psychology Department participant pool would receive pool credits to the rounded sum of their total earnings divided by 15.³ They were then given an opportunity to ask questions, which were answered with clarification or by reference back to the text.
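To make the payment and credit arithmetic concrete, a minimal sketch follows, assuming the rules as stated (a 25% chance of being paid 10% of game earnings, and pool credits equal to total earnings divided by 15, rounded). The helper function and its names are hypothetical, not part of the original materials.

```python
import random


def participant_outcome(total_earnings_gbp, in_psychology_pool, rng=random):
    """Apply the stated payment and credit rules to one participant."""
    paid = rng.random() < 0.25                             # 25% of participants are paid
    payment = 0.10 * total_earnings_gbp if paid else 0.0   # 10% of game earnings
    credits = round(total_earnings_gbp / 15) if in_psychology_pool else 0
    return payment, credits


# e.g. a participant earning the reported mean of £59.98, recruited from the pool
print(participant_outcome(59.98, in_psychology_pool=True))
```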

Participants completed a practice block consisting of 8 offers. They were instructed to use only the index finger of their dominant hand to either accept or reject the offers. Response buttons were the 'f' and 'h' keys (with the response keys affixed using glue). Between offers, participants were asked to keep their finger on the rest key ('g'), which was situated between the response buttons. If participants were using more than one finger, or otherwise not following the instructions, they were reminded of them.

Participants viewed a cartoon face alongside a randomly assigned name. Participants were verbally instructed to press any key to move on. After they had done so, participants were reminded which keys to press and were asked to press the 'Rest' key to continue to the offer. They then saw an offer of an integer £1-£4 inclusive; each offer size was displayed twice (8 trials in total). This number of practice trials increased the likelihood of rejections, allowing experimenters to observe and correct errors in key presses.

After the practice trials were complete, participants were informed onscreen of their earnings for the practice run, and asked if they had any questions. Questions were handled as in the initial introduction: either by reference back to the instruction sheet or with clarification. When participants were ready to proceed, the main experiment was set up. Participants were instructed to attract the experimenter's attention on completion, and the experimenter left the room and closed the door.

Participants were introduced to the proposers in the following format, with standardised photographs displayed to the right of the text:

[Name] is a [Years] University student.
[Name]'s most common offer is £CommonOffer.
Thus, in the most common offer that [Name] makes, if the responder accepts then they receive £CommonOffer, and [Name] receives £8-CommonOffer.
If the responder rejects, then neither the responder nor [Name] receives any money.

A name was randomly chosen from a list matched to the gender of the photo and displayed with a year chosen at random from the list "3rd" and "2nd". Common offers (integers £1-£4) each appeared 8 times, twice with each offer size. The next screen informed the participant that the third screen would present the proposer's offer to them and reminded them how to respond. Participants were asked to press the 'Rest' key to proceed; the following screen presented the offer, again with a reminder of how to respond.

On completion, participants were informed of their earnings. A random 25% were paid 10% of their earnings, decided using a random number generator. In addition, participants from the psychology pool also received their credits for participation. A debrief sheet giving the address of a website hosting a full debrief was provided, for participants to use after the experimenter had finished data collection.

Results

Descriptive results

Figure 3 shows the spread and mean of amounts earned. Mean earnings were £59.98 (N = 51, SD = £11.45). Modal earnings were £56. Analysis of the proposer profile variables (name, image, and year group) revealed no significant impact on results; the use of this methodology, utilising simulated (posed) proposers instead of real ones, therefore appears validated.

[Figure 3. Spread and mean of participants' total earnings.]

Effect of Offer Size on Response

A repeated measures ANOVA was run on the mean reaction time for each offer size. Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of offer size, χ²(5) = .57, p < .001; therefore Greenhouse-Geisser estimates of sphericity (ε = .75) were used to correct degrees of freedom. Results indicate that reaction time was significantly affected by offer size, F(2.24, 100.75) = 10.15, p < .001, ω² = .08. Contrasts indicate a quadratic trend in the relationship between offer size and reaction time, F(1, 45) = 17.24, p < .001, r = .53. Post hoc LSD tests show significant mean differences in reaction time between offers of £1 (M = 1157.32 ms, SD = 434.63 ms, N = 46) and offers of £2 (M = 1305.77 ms, SD = 430.08 ms, N = 46), t(45) = 3.16, p = .003, and offers of £4 (M = 993.92 ms, SD = 268.65 ms, N = 46), t(45) = 3.30, p = .002. The difference between reaction times at offers of £2 and £3 (M = 1181.42 ms, SD = 424.29 ms, N = 46) approached significance, t(45) = 1.97, p = .055, and significance was reached between offers of £2 and £4, t(45) = 5.68, p < .001, and between offers of £3 and £4, t(45) = 3.79, p < .001.
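For readers wishing to reproduce this kind of analysis, the sketch below illustrates a one-way repeated measures ANOVA on reaction time by offer size with a sphericity correction. It assumes the pingouin package and a long-format pandas DataFrame with hypothetical column names; the data here are simulated placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x offer size,
# holding that participant's mean reaction time (ms) for the cell.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(1, 47), 4),   # 46 participants
    "offer": np.tile([1, 2, 3, 4], 46),              # offer sizes in GBP
    "rt": rng.normal(1150, 400, size=46 * 4),        # placeholder reaction times
})

# One-way repeated measures ANOVA on reaction time by offer size,
# with Greenhouse-Geisser correction reported if sphericity is violated.
aov = pg.rm_anova(data=df, dv="rt", within="offer", subject="participant",
                  correction=True, detailed=True)
print(aov)
```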

[Figure 4]

Effect of Offer and Response on Reaction Time

Ambivalence. By subtracting the larger of the number of acceptances or the number of rejections of an offer from the total number of offers of a particular size (i.e. 8), we can compute an ambivalence score representing each participant's ambivalence towards offers of that size. The following formula was applied to the response data:

(8 - x) / 4 = ambivalence

Where x is the larger of the number of accepts or rejects. Ambivalence is then a number between 0 and 1 which may take the values 0, 0.25, 0.5, 0.75, and 1. Thus, a high ambivalence score for an offer size implies ambivalence towards that offer, i.e. offers of that size were accepted and rejected equally often. Correspondingly, a low ambivalence score implies little ambivalence towards an offer: at a score of 0, offers of that size were either all accepted or all rejected.
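A minimal sketch of this calculation follows; the function and data layout are hypothetical rather than the original analysis script.

```python
def ambivalence(responses):
    """Ambivalence score for one participant at one offer size.

    `responses` holds the eight responses to that offer size,
    True for accept and False for reject.
    """
    accepts = sum(responses)
    rejects = len(responses) - accepts
    x = max(accepts, rejects)   # the more frequent response
    return (8 - x) / 4          # 0 = always the same response, 1 = a 4/4 split


# e.g. an offer size accepted five times and rejected three times
print(ambivalence([True] * 5 + [False] * 3))  # 0.75
```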

These individual offer size scores were then correlated with their respective reaction times for each offer. A significant positive correlation was found between ambivalence to offers of £2 and reaction time (r = .34, p = .02), and ambivalence to offers of £3 and reaction time (r = .33, p = .02) but not at offers of £1 (r = .18, p = .22) or £4 (r = .25, p = .09).

The influence of Common Offer

Acceptance rate. Figure 5 illustrates the small observed difference in responses to common offers, independent of offer. A repeated measures ANOVA was run on the acceptance rate for each common offer size. Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of common offer size, χ²(5) = .27, p < .001; therefore Greenhouse-Geisser estimates of sphericity (ε = .55) were used to correct degrees of freedom. Results indicate no significant difference in acceptance rates between common offer sizes, F(1.64, 73.70) = 1.46, p = .24.

[Figure 5. Acceptance rates for each common offer size, independent of offer.]

A repeated measures ANOVA was run on acceptance rate with the factors of offer size (£1-£3) and common offer level: high (£3 and £4) or low (£1 and £2). Offers of £4 were excluded from this analysis as they were so rarely rejected. Results indicate that acceptance rate was significantly affected by offer size, F(2, 90) = 83.89, p < .001, and by common offer level, F(1, 45) = 6.57, p = .01.

Results indicate the acceptance rate for offers was significantly affected by the interaction of offer size and common offer level, F(2, 90) = 5.94, p = .004.

Contrasts indicate a significant linear trend for the effect of offer size on acceptance rate, F(1, 45) = 141.38, p < .001, r = .87, and for the effect of common offer level on acceptance rate, F(1, 45) = 6.57, p = .014, r = .36. At mid-range offer sizes, offers accompanied by a higher common offer were rejected more frequently; this effect size is small, F(1, 45) = 1.70, p = .006, r = .19. Post hoc LSD analyses show significant differences in acceptance rates between offers of £1 and £2 (t(45) = 4.47, p < .001), £1 and £3 (t(45) = 11.89, p < .001), and £2 and £3 (t(45) = 8.69, p < .001). Post hoc comparison also shows a significant difference in acceptance rates between low and high common offers (t(45) = 2.56, p = .014).

Reaction time. A repeated measures ANOVA was run on response time with factors of offer size (£1-£3) and common offer level (high or low).

Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of offer size, χ²(2) = .78, p = .004, and for the interaction of offer size and common offer level, χ²(2) = .84, p = .02; therefore Huynh-Feldt estimates of sphericity (ε = .84 and ε = .89 respectively) were used to correct degrees of freedom.

The results indicate response times were significantly affected by offer size, F(1.69, 76.05) = 3.31, p = .05. The effect of common offer level on response time did not reach significant levels, F(1, 45) = 3.65, p = .06. There was no significant interaction effect, F(1.79, 80.65) = .006, p = .99.

Contrasts indicate a significant quadratic trend for the effect of offer size on response time, F(1, 45) = 10.54, p = .002, r = .44. Bonferroni-corrected post hoc t-tests revealed a significant difference in reaction time between responses to offers of £1 and £2 (t(45) = 3.16, p = .003), but not between £2 and £3 (t(45) = 1.97, p = .055). Post hoc LSD showed no significant difference between reaction times at low versus high common offers (t(45) = 1.91, p = .063).

Breakdown. There was an observed difference in acceptance rates within offers dependent on common offer, specifically for offers of £2. Pooling the data for offers of £2 across common offers of £1 and £2 (low) and across common offers of £3 and £4 (high), a paired-samples t-test was conducted. This showed a significant difference between acceptance rates at high (M = 32.07%, SD = 38.61%, N = 46) and low (M = 50.00%, SD = 44.10%, N = 46) common offers in the £2 condition (t(45) = 3.28, p = .002, r = .44).
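A minimal sketch of this pooling and comparison is given below, assuming each participant's acceptance rates for £2 offers at low and high common-offer levels are already available; the arrays here are simulated placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant acceptance rates (%) for offers of £2 (N = 46),
# pooled within low (£1/£2) and high (£3/£4) common-offer levels.
rng = np.random.default_rng(0)
accept_low_common = rng.uniform(0, 100, size=46)    # placeholder data
accept_high_common = rng.uniform(0, 100, size=46)   # placeholder data

# Paired-samples t-test across the same 46 participants.
t_stat, p_value = stats.ttest_rel(accept_low_common, accept_high_common)
print(f"t(45) = {t_stat:.2f}, p = {p_value:.3f}")
```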

Offers of £2, which show the largest discrepancy in acceptance rates, also have the longest reaction times, as shown in Figure 6.

[Figure 6. Reaction times by offer size and common offer level.]

Non-parametric correlation analysis was conducted to explore the relationship between ambivalence scores for each offer at the two sub-levels of 'low common offer' and 'high common offer', and reaction times to those offer types. Ambivalence was computed as:

(4 - y) / 4 = ambivalence

Where y is the larger of the number of acceptances or rejections for that offer type (an integer in the range 1-4).

There were significant correlations between ambivalence and reaction times to offers of £2 with a low common offer (rs = .30, p = .04), offers of £2 with a high common offer (rs = .35, p = .02), offers of £3 with a low common offer (rs = .40, p = .006), offers of £3 with a high common offer (rs = .35, p = .02) and offers of £4 with a low common offer (rs = .40, p = .005).
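As with the earlier measure, a minimal sketch of the offer-type-level ambivalence score and its Spearman correlation with reaction time is given below; the data layout and values are hypothetical placeholders, not the original analysis.

```python
import numpy as np
from scipy import stats


def type_ambivalence(responses):
    """Ambivalence for one offer type (offer size x common-offer level).

    `responses` holds the four responses of that type,
    True for accept and False for reject.
    """
    accepts = sum(responses)
    rejects = len(responses) - accepts
    y = max(accepts, rejects)
    return (4 - y) / 4


# Hypothetical per-participant ambivalence scores and mean reaction times (ms)
# for one offer type, e.g. offers of £2 with a low common offer.
rng = np.random.default_rng(1)
ambivalence_scores = rng.choice([0.0, 0.25, 0.5], size=46)   # placeholder data
reaction_times = rng.normal(1300, 400, size=46)              # placeholder data

rho, p_value = stats.spearmanr(ambivalence_scores, reaction_times)
print(f"rs = {rho:.2f}, p = {p_value:.3f}")
```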

Discussion

The standard ultimatum game result was replicated in this study, which introduced the novel aspect of 'focal points' via the use of common offers. Average earnings were somewhat lower than expected from previous research. Our data extend previous evidence suggesting that offers of less than 20% are rejected 50% of the time, a rate replicated in this experiment for offers of £2 (25% of the pot), by showing that offers of £1 (12.5% of the pot) were rejected about 80% of the time. This extension explains the lower earnings profile.

It should be noted that the unusual occurrence of rejections of 50% splits (£4 offers) in this experiment is not unprecedented; remarkably, evidence suggests this may occur even with splits of over 50% (Hennig-Schmidt, Li, & Yang, 2007). As expected, a clear linear relationship was shown between the size of the offer made and the rejection rate for that offer.

Reaction Time

There was a moderate effect of offer size on reaction time. Reaction times correlated negatively with offer size, suggesting that responders think more about how to respond to lower offers than about the relatively easy-to-decide larger offers. Reaction times were significantly different between offers of £1-£3 and offers of £4, suggesting that 50/50 split offers are very easily decided, whereas all other offers require more deliberation.

As expected, there was a moderate interaction effect of offer size and response on reaction time. There is a difference in RT between accepting and rejecting an offer, mediated by the size of the offer: for low offers it is faster to reject than to accept an offer, and for high offers the reverse is true.

The data show that those participants who both accepted and rejected offers at £2 and £3 had a much smaller difference in RT between accepting and rejecting. This may be indicative of a higher cognitive demand at these levels, regardless of actual response, for those participants, in contrast with participants who never accepted offers of £2, or never rejected offers of £3, for whom no such demand existed.

Indeed, the observed effect of offer on reaction time, without considering response, may be partly explained by the relative numbers of each response at particular offer sizes, explaining why offers of £2 took longer to respond to: 50% of the time these offers were accepted, taking longer, whereas the other 50% of the time they were rejected. These offers show the most conflicted responses in the offer set.

Evidence that within-subject variance is related to the subject's ambivalence towards the offer supports and extends the original hypothesis that low offers would be rejected quickly and accepted slowly. It further suggests that, regardless of offer size, a measure of ambivalence may offer insight into the overall reaction time for offers of that size. The implication is that the involvement of different brain areas in the decisions to accept and reject is exhibited at a more subtle behavioural level than rejection rates alone; a topic which should be further researched.

This research adds to the neuroeconomic evidence presented in the introduction regarding the neurological correlates of economic decision making, and to the dual-demand account contrasting emotional rejections with cognitive acceptances (Sanfey et al., 2003; O'Connor et al., 2002; Koenigs & Tranel, 2007).

Common Offer

We can observe a relationship between common offer size and RT, with high common offers producing longer RTs. Given the correlation between RT and ambivalence towards common offer types, this relationship may be explained by a higher ambivalence towards offers preceded by information regarding a high common offer. This provides tentative support for the 'envy hypothesis' discussed in the introduction.

Offers of £2 produced a marginally higher RT overall. This difference was particularly apparent at offers of £2 with a common offer of £3, implying a larger cognitive demand at those offers. It appears that participants were most ambivalent towards offers of this size: accepting them roughly 50% of the time, but thinking about them for longer, and with a smaller difference in RT between accepting and rejecting the offers. It may be at these 'ambivalent' offers that the largest individual differences are displayed: lower, and people reject frequently; higher, and people accept frequently, so any effect size at these less ambivalent levels will be smaller than at £2.

An interpretation of the £2 result stemming from Knez and Camerer (1995) is possible here: it might be that at least some responders compare themselves to whoever has the closest comparable earnings. If offers are high, then that comparison group is the proposer (both earning something close to 50%), so the common offer is less relevant to the decision. If offers are low, then the closest comparison group is other responders. If, for example, one receives an offer of 25% of the pot (£2 in this experiment), then the proposer would receive 75% of the pot. If that proposer normally offers responders 50%, then the common offer is much closer to the current 25% offer than to the proposer's 75% takings. This 50% common offer is also a much larger amount than the offer made, and for the envious responder the 25% offer might be considered a personal snub worth punishing. If, however, the proposer normally offers 25%, then the responder might see that they are generally receiving a comparable amount to other responders in that situation, and be willing to accept. It should also be noted that the number of trials on which these groupings were based was very small (only 8), so the margin for error in assigning participants to behaviour categories is high; further research should aim to tackle this.

Two obvious methods of doing this would be, first, to increase the number of trials, and second, simply to ask participants to state their acceptable offer/common offer combinations and extrapolate data from those statements. The concern with the latter methodology is that the effect a 'minimally acceptable offer and common offer' procedure would have on results is not clear, and it removes some variance in rejection rates which may not be random variance.

Further Research

Tentative support can be offered for the influence of common offer on participant behaviour. However, further research must be undertaken in order to understand the relationships which this project and others (Knez & Camerer, 1995; Bohnet & Zeckhauser, 2004) have explored.

Participants in our game were not explicitly asked to pay attention to the common offer, yet we gained some limited evidence that participants did take note of this information. A further adaptation of this experiment might add to the 'common offer' variable a level such as 'this is the first time this participant has played the game', or 'no common offer information is available for this participant', or some other similar filler, to act as a control and to tease out the influence of common offer on response. Any experiment increasing the number of trials, and including common offer data points, should attempt to record reaction times and compute ambivalence scores at the offer-type level, as opposed to the whole-offer-size level analysed here. The relationship between ambivalence as scored using our methodology and reaction times at this type level would be a further extension to both reaction time and social comparison research.

References

GIMP: The GNU Image Manipulation Program. Available at: http://www.gimp.org/ [Accessed March 19, 2008].
Abbink, K., Bolton, G., Sadrieh, A. & Tang, F. (2001). Adaptive Learning versus Punishment in Ultimatum Bargaining. Games and Economic Behavior, 37(1), 1-25. doi:10.1006/game.2000.0837
Bohnet, I. & Zeckhauser, R. (2004). Social Comparisons in Ultimatum Bargaining. Scandinavian Journal of Economics, 106(3), 495-510. doi:10.1111/j.1467-9442.2004.00372.x
Bosman, R., Sonnemans, J. & Zeelenberg, M. (2001). Emotions, Rejections, and Cooling off in the Ultimatum Game. Unpublished Manuscript.
Brañas-Garza, P., León-Mejía, A. & Miller, L. (2007). Response Time under Monetary Incentives: The Ultimatum Game. Jena Economic Research Papers, 70.
Burns, K. (2006). The Influence of Social and Affective Information on Ultimatum Bargaining Behavior. Dissertation Abstracts International: Section B: The Sciences and Engineering, 67(1-B), 597.
Cameron, L. (1999). Raising the Stakes in the Ultimatum Game: Experimental Evidence from Indonesia. Economic Inquiry, 37(1),47-60. doi:10.1111/j.1465-7295.1999.tb01415.x
Croson, R. (1996). Information in ultimatum games: An experimental study. Journal of Economic Behavior & Organization, 30(2),197-212. doi:10.1016/S0167-2681(96)00857-8
Fu, T., Kong, W. & Yang, C. (2007). Monetary Stakes and Socioeconomic Characteristics in Ultimatum Games: An Experiment with Nation-Wide Representative Subjects. Unpublished Manuscript. Available at: http://www.fas.nus.edu.sg/ecs/events/set2007/programme.html.
Goldin, P. R., McRae, K., Ramel, W., & Gross, J. J. (2008). The Neural Bases of Emotion Regulation: Reappraisal and Suppression of Negative Emotion. Biological Psychiatry, 63(6), 577–586.
Guth, W., Schmittberger, R. & Schwarze, B. (1982). An Experimental Analysis of Ultimatum Bargaining. Journal of Economic Behavior & Organization, 3 (3),367-388. doi:10.1016/0167-2681(82)90011-7
Handgraaf, M., Dijk, E.V., Wilke, H. & Vermunt, R. (2003). The salience of a recipient’s alternatives: Inter- and intrapersonal comparison in ultimatum games. Organizational Behavior and Human Decision Processes, 90(1),165-177. doi:10.1016/S0749-5978(02)00512-5
Harlé, K. & Sanfey, A. (2007). Incidental Sadness Biases Social Economic Decisions in the Ultimatum Game. Emotion, 7(4), 876-881. doi:10.1037/1528-3542.7.4.876
Hennig-Schmidt, H., Li, Z. & Yang, C. (2007). Why people reject advantageous offers—Non-monotonic strategies in ultimatum bargaining: Evaluating a video experiment run in PR China. Journal of Economic Behavior & Organization, 65(2), 373-384. doi:10.1016/j.jebo.2005.10.003
Henrich, J. (2000). Does Culture Matter in Economic Behavior? Ultimatum Game Bargaining among the Machiguenga of the Peruvian Amazon. The American Economic Review, 90(4), 973-979. doi:10.1257/aer.90.4.973
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., … Tracer, D. (2005). "Economic man" in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences, 28, 795-855. doi:10.1017/S0140525X05000142
Hoffman, E., McCabe, K. & Smith, V. (1996). On Expectations and The Monetary Stakes In Ultimatum Games. International Journal of Game Theory, 25(3), 289-301.doi:10.1006/game.2000.0805
Kirchsteiger, G., (1994). The role of envy in ultimatum games. Journal of Economic Behavior & Organization, 25(3), 373-389. doi:10.1016/0167-2681(94)90106-6
Knez, M. & Camerer, C. (1995). Outside Options and Social Comparison in Three-Player Ultimatum Game Experiments. Games and Economic Behavior, 10(1),65-94. doi:10.1006/game.1995.1025
Koenigs, M. & Tranel, D. (2007). Irrational Economic Decision-Making after Ventromedial Prefrontal Damage: Evidence from the Ultimatum Game. Journal of Neuroscience, 27(4), 951-956. doi:10.1523/JNEUROSCI.4606-06.2007
Levine, D. (1998). Modeling Altruism and Spitefulness in Experiments. Review of Economic Dynamics, 1(3), 593-622. doi:10.1006/redy.1998.0023
List, J. & Cherry, T. (2000). Learning to Accept in Ultimatum Games: Evidence from an Experimental Design that Generates Low Offers. Experimental Economics, 3 ,11-29.
Merry, E. (1995). First Names: The definitive guide to popular names in England and Wales 1944-1994 and in the regions 1994. London: HMSO.
O'Connor, K., De Dreu, C., Schroth, H., Barry, B., Lituchy, T. & Bazerman, M. (2002). What We Want to Do Versus What We Think We Should Do: An Empirical Investigation of Intrapersonal Conflict. Journal of Behavioral Decision Making, 15(5), 403-418. doi:10.1002/bdm.426
Pillutla, M. & Murnighan, J. (1996). Unfairness, Anger, and Spite: Emotional Rejections of Ultimatum Offers. Organizational Behavior and Human Decision Processes, 68(3), 208-224. doi:10.1006/obhd.1996.0100
Psychology Software Tools, Inc. E-Prime. Available at: http://www.pstnet.com/info/about.htm [Accessed March 19, 2008].
Rubinstein, A. (2007). Instinctive and Cognitive Reasoning: A Study of Response Times. The Economic Journal, 117(523), 1243-1259. doi:10.1111/j.1468-0297.2007.02081.x
Sanfey, A., Rilling, J., Aronson, J., Nystrom, L. & Cohen, J. (2003). The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science, 300 (5656),1755-1758.doi:10.1126/science.1082976
Slonim, R. & Roth, A. (1998). Learning in High Stakes Ultimatum Games: An Experiment in the Slovak Republic. Econometrica, 66(3),569-596.
Sosis, R. (2005). Methods do matter: Variation in experimental methodologies and results in "Economic man" in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioural and Brain Sciences, 28, 795-855. doi: 10.1017/S0140525X05420142b
Straub, P. & Murnighan, J. (1995). An experimental investigation of ultimatum games: information, fairness, expectations and lowest acceptable offers. Journal of Economic Behavior & Organization, 27(3),345-364.doi:10.1016/0167-2681(94)00072-M
Sutter, M., Kocher, M. & Strauss, S. (2003). Bargaining under time pressure in an experimental ultimatum game. Economics Letters, 81(3),341-347. doi:10.1016/S0165-1765(03)00215-5
van't Wout, M., Kahn, R., Sanfey, A. & Aleman, A. (2005). Repetitive transcranial magnetic stimulation over the right dorsolateral prefrontal cortex affects strategic decision-making. NeuroReport, 16(16), 1849-1852.
van't Wout, M., Kahn, R., Sanfey, A. & Aleman, A. (2006). Affective state and decision-making in the Ultimatum Game. Experimental Brain Research, 169 (4),564-568. doi:10.1007/s00221-006-0346-5


¹ This is very similar to the game described in footnote 5 of Knez and Camerer (1995).

² Due to a computer error, data were lost for five participants.

³ In order for psychology students to utilise the computerised 'participant pool' (comprising all psychology students, on an online advertising and booking system) for their final-year project, they must gain a certain number of pool credits, typically gaining 1 credit per 30-minute study.


This article is published by the European Federation of Psychology Students Associations under Creative Commons Attribution 3.0 Unported license. CC-BY 3.0