Welcome to the beach-litter section
Click on a tab below to get started.
If you have any questions contact us.
--> Read me first <--
If you can't define "beach-litter-inventory", then this is a good place to start.

Why is this important?
Beach litter and trash in the water are a fact of life.

Water is better without trash
A clean beach is a good thing; its value is guaranteed.
The most obvious benefit of projects like this is the removal of trash from the environment.
Pragmatic action
A sure way to limit the damage
The data we collect may lead to a solution tomorrow, but removing trash now is for today.
Management tool
How else are you going to know what is out there?
If you intend to manage it, then you need to measure it.
Combined data 2015 to 2018
Descriptive statistics, maps, time-series charts. Categorized by city, water-body, or project.
Summary of all data for Switzerland on record
Averages not weighted
Units: pcs/m
No of samples: 1,100
No of locations: 138
No of rivers - lakes: 0 - 17
First sample date: Nov 2015
Most recent sample: Jul 2018
No of pieces of garbage: 139,745
Avg pieces/meter (pcs/m): 3.35 pcs/m
Standard deviation: 6.92
25th percentile: 0.40 pcs/m
75th percentile: 3.33 pcs/m
Min - Max pcs/m: 0.0000 - 76.88
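A summary table like the one above can be assembled from the raw survey records with pandas. The column names and the four example rows below are hypothetical stand-ins; only the statistic fields mirror the list above.

```python
import pandas as pd

# Hypothetical survey records: one row per survey (location, date, pcs/m).
surveys = pd.DataFrame({
    "location": ["A", "A", "B", "C"],
    "date": pd.to_datetime(["2015-11-20", "2016-03-01", "2017-06-15", "2018-07-01"]),
    "pcs_m": [0.5, 3.2, 1.1, 7.8],
})

# Each entry corresponds to one line of the summary table.
summary = {
    "No of samples": len(surveys),
    "No of locations": surveys["location"].nunique(),
    "First sample date": surveys["date"].min().strftime("%b %Y"),
    "Most recent sample": surveys["date"].max().strftime("%b %Y"),
    "Avg pcs/m": round(surveys["pcs_m"].mean(), 2),
    "Standard deviation": round(surveys["pcs_m"].std(), 2),
    "25th percentile": surveys["pcs_m"].quantile(0.25),
    "75th percentile": surveys["pcs_m"].quantile(0.75),
    "Min - Max pcs/m": (surveys["pcs_m"].min(), surveys["pcs_m"].max()),
}
```

Grouping by city, water body, or project is the same computation after a `surveys.groupby(...)`.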
Location of beach litter surveys:
Map of all beach litter surveys reported
Units: pcs/m; circle size relative to avg pcs/m
Use buttons below to select major categories
Plot of all litter surveys 2015 to Jul 2018:
Combined results of all volunteer surveys
Units: pcs/m. Zoom the date range by clicking and dragging in the chart area.
Activate/deactivate items in the legend
The probability of garbage
Comparing year over year probability distributions of trash survey results on Lac Léman
Beach litter is a probability
Understanding how trash gets in the water
Log normal distribution
Allows for the application of standard analysis techniques

The scatter plot for Lac Léman 2015 - 2017 displays the results, grouped by year (Nov - Nov). There are a few differences between the two sample groups:

  • The samples for 2015 were collected by three to four people (hammerdirt staff)
  • The samples for 2017 were collected by many people from four distinct groups
  • The samples gathered by hammerdirt staff counted all trash with no lower size limit
  • The samples gathered by SLR volunteers did not count objects less than 2.5cm

Despite these differences, the results vary only in their maximum and minimum values. The counting criteria (the lower size limit) do not really affect the mid-range of the results. Most of the results fall between 1.2 and 13 pcs/m.

It is safe to say that these numbers are minimum values. Furthermore, we round the results to no less than 0.0001 pcs/m (we figure that anything beyond that is not significant).

The cutoff date for each year is November 15; that's the week the project started in 2015.

  Lac Léman: two consecutive years of data
107 observations from 2015 - 2017
28 different locations, multiple individuals
Was there a change from 2016 to 2017?

The distribution of the natural log (np.log()) of the results tells the same story. The 95th percentile for year one is a little further to the right and the 5th percentile for year two is a little further to the left.
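The log-transform comparison described above can be sketched in a few lines. The two yearly samples below are simulated log-normal draws, not the real survey values; only the sample sizes match the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated stand-ins for the two yearly samples (log-normal shaped),
# sized to match the figure: year one n=71, year two n=36.
year_one = rng.lognormal(mean=0.8, sigma=1.0, size=71)
year_two = rng.lognormal(mean=0.8, sigma=1.0, size=36)

# Compare the tails of the natural log of the results.
log_one, log_two = np.log(year_one), np.log(year_two)
p95_one = np.percentile(log_one, 95)  # right tail, year one
p5_two = np.percentile(log_two, 5)    # left tail, year two
```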

  Year over year probability density function
Probability distribution of pcs/m
year one n = 71, year two n = 36
What does this mean?

If you went down to the lake shore and collected and counted the garbage, there was a high probability that you would find the same amount year over year. Specifically, the probability of finding less than 1 piece of trash per meter was 3.1% in 2016 and 3% in 2017. So really no change at all. How is this calculated? -- check here
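One way to get that probability, assuming (as above) that the results are log-normal, is to fit a normal distribution to the log of the results and evaluate its CDF at log(1 pcs/m). A minimal sketch; the function name and any sample values are illustrative, not the project's actual code:

```python
import math
import numpy as np

def prob_below(results, threshold=1.0):
    """P(pcs/m < threshold) under a log-normal fit to the survey results."""
    logs = np.log(results)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    z = (math.log(threshold) - mu) / sigma
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

Applied to each year's sample, this gives the year-over-year probabilities quoted above.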

In the oven
Works in progress: most of the "mathy stuff" is done in notebooks. This is a collection of those notebooks.
Automated reporting
Communicating results in an objective manner
Standard format for all operations
Automated, exploitable at the time the record is created

This is the initial code used to generate standardized, automated reports for the SLR. Python (Pandas, Matplotlib, Jupyter) is the main toolset. Output is a PDF or any image format (see below).
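As a rough illustration of the summary step of such a report, a minimal sketch follows. The function name and layout are illustrative, not the actual SLR report format; only the `pcs_m` field is assumed from the surveys.

```python
import pandas as pd

def make_report(df: pd.DataFrame, title: str) -> str:
    """Render a survey DataFrame as a standardized plain-text summary.

    Assumes df has a 'pcs_m' column; the layout is a placeholder for
    the real PDF/image output produced with Matplotlib.
    """
    stats = df["pcs_m"].describe()
    lines = [
        title,
        f"Samples: {int(stats['count'])}",
        f"Avg pcs/m: {stats['mean']:.2f}",
        f"Std dev: {stats['std']:.2f}",
        f"Min - Max: {stats['min']:.2f} - {stats['max']:.2f}",
    ]
    return "\n".join(lines)
```

Because the format is fixed, the same function can run automatically at the time each record is created.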

This work will be revived and the functionality offered to participating organisations. However, there is still a lot of work to be done.

Links to a repository on Github

A rating scheme for beach litter based on probability
Using the probability distribution of litter to rate beaches
Based on the Log normal distribution
A beach's average pcs/m result is compared to the distribution

It is impossible to determine how much trash is really out there. However, we can determine how likely you are to find a given quantity of trash at a beach. We use that to rate beach litter at sites in the SLR.

Links to a repository on Github

Image: Rating of SLR beaches
Rating based on each beach's avg pcs/m quantile ranking.
Four classifications
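A quantile-based rating with four classifications can be sketched with pandas `qcut`. The beach names, averages, and class labels below are hypothetical; only the idea of ranking each beach's average pcs/m by quantile comes from the scheme described above.

```python
import pandas as pd

# Hypothetical per-beach average survey results (pcs/m).
beach_avgs = pd.Series(
    [0.3, 0.9, 1.5, 2.2, 3.4, 5.0, 8.7, 20.1],
    index=[f"beach_{i}" for i in range(8)],
)

# Four classifications from the quantile ranking of each beach's average:
# the lowest quartile rates best, the highest quartile worst.
ratings = pd.qcut(beach_avgs, q=4, labels=["good", "fair", "poor", "bad"])
```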
Comparing probability distributions
Answering the question: "What will you find at the beach?"
Comparing the PDF for pcs/m across categories and items
Hotel nights sold as a proxy for changes in population density -- Montreux

There are regional differences that can be identified by comparing the distributions of specific items. Even with a lower population density, Montreux still has an elevated pcs/m rating. Hotel nights sold per month can account for some of that.
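A first, simple check of such a proxy is the linear correlation between monthly hotel nights sold and the monthly average survey result. The figures below are invented for illustration; the analysis described above compares full distributions, not just a correlation coefficient.

```python
import numpy as np

# Hypothetical monthly figures for one municipality: hotel nights sold
# and the average survey result (pcs/m) for the same month.
hotel_nights = np.array([12000, 15000, 21000, 30000, 28000, 16000])
avg_pcs_m = np.array([1.1, 1.4, 2.0, 2.9, 2.6, 1.5])

# Pearson correlation between the two monthly series.
r = np.corrcoef(hotel_nights, avg_pcs_m)[0, 1]
```

A strong positive `r` would support treating hotel nights as a stand-in for month-to-month changes in effective population density.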

Links to a repository on Github

Image: pcs/m results, lake vs rivers SLR
Logarithmic scale (np.log())
Only results greater than zero
How much trash is out there?
Using results from the surveys to estimate quantity
Random sampling from a truncated gaussian distribution
Random sampling from actual results (with replacement)

Somebody asked, so I did it: how many pieces of trash are on the shores of Swiss waterways?
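Both estimation methods named above can be sketched with NumPy alone: a bootstrap that resamples the actual results with replacement, and draws from a gaussian fit to the results truncated at zero (here by simple rejection, since pcs/m cannot be negative). The observed values and the shoreline length are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder observed pcs/m results and an assumed shoreline length.
observed = np.array([0.4, 0.9, 1.2, 2.5, 3.3, 5.1, 7.8, 12.0])
shoreline_m = 1_500_000  # meters of shoreline (assumption)

# Method 1: bootstrap -- resample the actual results with replacement,
# turning each resampled mean into a total-pieces estimate.
boot = rng.choice(observed, size=(5000, observed.size), replace=True)
boot_totals = boot.mean(axis=1) * shoreline_m

# Method 2: truncated gaussian -- draw from a normal fit to the
# results, rejecting negative draws.
mu, sigma = observed.mean(), observed.std(ddof=1)
draws = rng.normal(mu, sigma, size=20000)
draws = draws[draws >= 0][:5000]
trunc_total = draws.mean() * shoreline_m

# An interval for the bootstrap estimate of the total.
low, high = np.percentile(boot_totals, [5, 95])
```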

Links to a repository on Github