Logic programs are highly amenable to parallelization, and their level of abstraction relieves the programmer of many of the most difficult and error-prone details of parallel programming. However, tuning the performance of a parallel logic program is nontrivial. While working with programmers, we noticed that they evolved tuning strategies based on observed parallel performance. This paper illustrates some pitfalls inherent in that approach, using simple examples whose behaviour does not depend on any particular task-scheduling algorithm and which are mostly non-speculative, and therefore of general interest. The paper has two aims: to make parallel logic programmers more aware of such pitfalls, and to pose a challenge for future runtime analysis tools.