Welcome to the beach-litter section
Click on a tab below to get started.
If you have any questions contact us.
--> Read me first <--
If you can't define "beach-litter-inventory", then this is a good place to start.

Why is this important?
Beach litter and trash in the water are a fact of life.

Water is better without trash
A clean beach is a good thing; that value is guaranteed.
The most obvious benefit of projects like this is the removal of trash from the environment.
Pragmatic action
A sure way to limit the damage
The data we collect may lead to a solution tomorrow, but removing trash now is for today.
Management tool
How else are you going to know what is out there?
If you intend to manage it, then you need to measure it.
Combined data 2015 to 2018
Descriptive statistics, maps, time-series charts. Categorized by city, water-body, or project.
Summary:
Summary of all data for Switzerland on record
Averages not weighted
Units: pcs/m
No of samples: 1,098
No of locations: 136
No of rivers - lakes: 27 - 15
First sample date: Nov 2015
Most recent sample: Jul 2018
No of pieces of garbage: 140,542
Avg pieces per meter (pcs/m): 3.38
Standard deviation: 6.94
25th percentile: 0.41 pcs/m
75th percentile: 3.35 pcs/m
Min - Max pcs/m: 0.0061 - 76.88
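
A minimal sketch of how a summary like this can be produced with pandas. The file and column names ("slr_surveys.csv", "pcs_m", "location") are assumptions for illustration, not the actual SLR data schema.

```python
# Sketch: reproduce the descriptive statistics above from a survey export.
# File and column names are assumptions, not the actual SLR schema.
import pandas as pd

surveys = pd.read_csv("slr_surveys.csv")      # hypothetical export of all survey records
pcs_m = surveys["pcs_m"]                      # pieces of trash per meter, one value per survey

summary = {
    "No of samples": len(surveys),
    "No of locations": surveys["location"].nunique(),
    "Avg pcs/m (unweighted)": pcs_m.mean(),   # simple mean, not weighted by survey length
    "Standard deviation": pcs_m.std(),
    "25th percentile": pcs_m.quantile(0.25),
    "75th percentile": pcs_m.quantile(0.75),
    "Min - Max": (pcs_m.min(), pcs_m.max()),
}

for label, value in summary.items():
    print(f"{label}: {value}")
```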
Location of beach litter surveys:
Map of all beach litter surveys reported
Units = pcs/m; circle size relative to avg pcs/m
Use buttons below to select major categories
  Plot of all litter surveys 2015 to Sep 2018:
Combined results of all volunteer surveys
Units = pcs/m; zoom dates by clicking and dragging in the chart area
Activate/deactivate items in the legend
The probability of garbage
Comparing year over year probability distributions of trash survey results on Lac Léman
Beach litter is a probability
How can we compare year over year results?
Log normal distribution (click to see how 'normal')
Allows for the application of standard analysis techniques

The scatter plot for Lac Léman 2015 - 2017 displays the results, grouped by year (Nov - Nov). There are a few differences between the two sample groups:

  • The samples for 2015 were collected by three to four people (hammerdirt staff)
  • The samples for 2017 were collected by many people from four distinct groups
  • The samples gathered by hammerdirt staff counted all trash with no lower size limit
  • The samples gathered by SLR volunteers did not count objects less than 2.5cm

Despite these differences, the results vary only in their maximum and minimum values. The counting criterion (the lower size limit) does not really affect the mid-range of the results. Most of the results fall between 1.2 and 13 pcs/m.

It is safe to say that these numbers are minimum values. Furthermore, we round the results to no less than 0.0001 pcs/m (we figure that anything beyond that is not significant).

The cutoff date for each year is November 15, the week the project started in 2015.

  Lac Léman: two consecutive years of data
124 observations from 2015 - 2017
28 different locations, multiple individuals
Was there a change from 2016 to 2017?

The distribution of the natural log (np.log()) of the results tells the same story. The 95th percentile for year one is a little further to the right and the 5th percentile for year two is a little further to the left.
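
A minimal sketch of that comparison, assuming a per-survey table with "pcs_m" and "survey_year" columns (assumed names): take the natural log of the results and compare the 5th and 95th percentiles of each year.

```python
# Sketch: compare the tails of the log-transformed results, year over year.
# File and column names are assumptions about the data layout.
import numpy as np
import pandas as pd

leman = pd.read_csv("lac_leman_2015_2017.csv")   # hypothetical per-survey results

# Natural log of the pcs/m results, grouped by project year (Nov - Nov)
leman = leman.assign(log_pcs_m=np.log(leman["pcs_m"]))

for year, group in leman.groupby("survey_year"):
    p5, p95 = np.percentile(group["log_pcs_m"], [5, 95])
    print(f"{year}: 5th = {p5:.2f}, 95th = {p95:.2f}, n = {len(group)}")
```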

  Year over year probability density function
Probability distribution of pcs/m
year one n = 83, year two n = 41
What does this mean?

If you went down to the lake shore and collected and counted the garbage, there was a high probability that you would find the same amount year over year. Specifically, the probability of finding less than 1 piece of trash per meter was 3.1% in 2016 and 3% in 2017. So really no change at all. How is this calculated? -- check here
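
One way to arrive at numbers like these, sketched below with synthetic placeholder data: fit a log-normal distribution to each year's pcs/m results and read the probability of less than 1 pcs/m off the fitted CDF. This is an illustration of the idea, not the exact calculation behind the figures above.

```python
# Sketch: probability of finding less than 1 pcs/m, from a fitted log-normal.
# The arrays below are synthetic stand-ins, not the published survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
year_one = rng.lognormal(mean=1.0, sigma=1.0, size=83)   # stand-in for the 2016 results
year_two = rng.lognormal(mean=0.9, sigma=1.1, size=41)   # stand-in for the 2017 results

for label, results in [("2016", year_one), ("2017", year_two)]:
    # Fit a log-normal to the pcs/m values (location fixed at 0)
    shape, loc, scale = stats.lognorm.fit(results, floc=0)
    p_below_one = stats.lognorm.cdf(1, shape, loc=loc, scale=scale)
    print(f"P(pcs/m < 1) in {label}: {p_below_one:.1%}")
```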

In the oven
Works in progress: most of the "mathy stuff" is done in notebooks, and this is a collection of those notebooks.
Automated reporting
Communicating results in an objective manner
Standard format for all operations
Automated, exploitable at the time the record is created

This is the initial code used to generate standardized, automated reports for the SLR. Python (with Pandas, Matplotlib, and Jupyter) is the main toolset. Output is a PDF or any image file (see below).
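
A minimal sketch of the idea, assuming a CSV export of the survey records; the file, column, and location names are placeholders, not the actual report code.

```python
# Sketch: load a location's survey records, plot the pcs/m time series,
# and save the figure as a PDF (or any image format Matplotlib supports).
# File, column, and location names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

surveys = pd.read_csv("slr_surveys.csv", parse_dates=["date"])   # hypothetical export
location = surveys[surveys["location"] == "Vidy"]                # hypothetical location name

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(location["date"], location["pcs_m"], marker="o")
ax.set_title("Beach litter survey results: Vidy")
ax.set_ylabel("pcs/m")

fig.savefig("vidy_report.pdf")   # or .png, .svg, ...
```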

This script will be revived and its functionality offered to participating organisations.

Links to a repository on GitHub

A rating scheme for beach litter based on probability
Using the probability distribution of litter to rate beaches
Based on the Log normal distribution
A beach's average pcs/m result is compared to the distribution

It is impossible to determine how much trash is really out there. However, we can determine how likely you are to find a given quantity of trash at a beach. We use that to rate beach litter at sites in the SLR.

Links to a repository on GitHub

Image: Rating of SLR beaches
Rating based on each beach's avg pcs/m quantile ranking.
Four classifications
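
A sketch of how such a quantile-based rating could look in pandas: each beach's average pcs/m is placed in one of four classes according to where it falls among all beach averages. The class labels, cut points, and data names below are illustrative assumptions, not the rating scheme itself.

```python
# Sketch: four-class rating from the quantile ranking of beach averages.
# File, column, and class names are assumptions for illustration.
import pandas as pd

surveys = pd.read_csv("slr_surveys.csv")                   # hypothetical export
beach_avg = surveys.groupby("location")["pcs_m"].mean()    # one average per beach

# Four classifications from the quantile ranking of the averages
ratings = pd.qcut(
    beach_avg,
    q=[0, 0.25, 0.5, 0.75, 1.0],
    labels=["low", "moderate", "high", "very high"],
)
print(ratings.value_counts())
```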
Comparing results across similar locations
Answering the question: "How do the results of Swiss lakes compare to each other?"
Identify specific categories for intervention
Identify items that may have been misidentified

There are regional differences and similarities that can be identified by comparing the results of specific items and/or groups of items found in similar locations. Here we compare the pcs/m results of objects with different origins on Swiss lakes.
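
A sketch of the kind of comparison meant here, assuming a per-item results table with "water_body", "origin_group", and "pcs_m" columns (assumed names): a pivot table of median pcs/m per object group and lake.

```python
# Sketch: median pcs/m per object group, by lake.
# File, column, and group names are assumptions about the survey data.
import pandas as pd

items = pd.read_csv("slr_item_results.csv")     # hypothetical per-item records

comparison = items.pivot_table(
    index="water_body",        # e.g. "Lac Léman", "Bodensee", ...
    columns="origin_group",    # e.g. "food and drink", "construction", "hygiene"
    values="pcs_m",
    aggfunc="median",
)
print(comparison.round(2))
```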


Click here to see the notebook
opens on GitHub
How much trash is out there?
Using results from the surveys to estimate quantity
Random sampling from a truncated Gaussian distribution
Random sampling from actual results (with replacement)

Somebody asked, so I did it. How many pieces of trash are on the shores of Swiss waterways?
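
A sketch of the two approaches named above, using synthetic placeholder data: draw pcs/m values either from a truncated Gaussian fitted to the survey results or from the results themselves with replacement, then scale by an assumed shoreline length. The numbers are illustrative, not the estimate in the notebook.

```python
# Sketch: estimate total pieces of trash from pcs/m results.
# The "observed" results and shoreline length are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed = rng.lognormal(mean=1.0, sigma=1.2, size=1098)   # stand-in for pcs/m results
shoreline_m = 1_500_000                                    # assumed meters of shoreline (placeholder)

# 1) Truncated Gaussian: normal fitted to the results, truncated at zero
mu, sigma = observed.mean(), observed.std()
a = (0 - mu) / sigma                                       # standardized lower bound at 0 pcs/m
draws_gauss = stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                  size=10_000, random_state=1)

# 2) Bootstrap: resample the observed pcs/m values with replacement
draws_boot = rng.choice(observed, size=10_000, replace=True)

for name, draws in [("truncated Gaussian", draws_gauss), ("bootstrap", draws_boot)]:
    estimate = draws.mean() * shoreline_m                  # expected pieces over the shoreline
    print(f"{name}: ~{estimate:,.0f} pieces")
```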

Links to a repository on GitHub

We have grants for organizations with unfunded requirements. Contact us for more information.