🍽 🥨 MensaarLecker -- A beloved tool to find out the Mensa Ladies' favourite menu using Selenium 🥨 🍽

Repository: MensaarLecker

As a UdS student,
are you tired of seeing french fries 🍟 three times a week, or wondering when you can have the best pizza 🍕 in the Mensacafe?
MensaarLecker aims to collect all the data from Menu 1, Menu 2, and the Mensacafe to trace your favourite menu (or the Mensa Ladies' favourite)!


🆕 Updates

05.08 – Telegram bot @Mensaar_Bot is published.

(See my development blog here: MensaarLecker Development Log 3 – Telegram Bot Deployment and Integration)

04.21 – HTW menus are now added to the statistics.


🥗 Description

A fully automated scraper and static website for the Saarbrücken Mensa, powered by Python, Selenium, Google Sheets, and GitHub Actions.

MensaarLecker Development Log (2) -- Web Developing and GitHub Workflow

This blog post covers:

  • My personal experience developing a web crawler with Selenium
  • Explanations with examples from my repository: MensaarLecker

Fetching Data for Web Development

Previous post: MensaarLecker Development Log (1) – Web Crawling

Continuing from the last post: we have already implemented a script that collects the Mensa menu and stores it in Google Sheets. It is time to build our web interface and connect it to this database.

Fetch Data from Google Sheets using Publish to the Web

First, we need to publish our spreadsheet so that the data can be fetched publicly.

  1. In the Spreadsheet, click Share → Change access to Anyone with the link.

  2. Click File → Share → Publish to the web.

  3. Select Entire Document → Comma-separated values (.csv) and click Publish.

  4. Copy the public CSV link; its typical shape is shown below.
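
The copied link generally looks like this (the long token after /d/e/ is the document's publish ID, truncated here):

https://docs.google.com/spreadsheets/d/e/2PACX-.../pub?output=csv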

menu.py
import requests

SCRIPT_URL = "{PUBLISH_LINK}"  # the public CSV link from step 4

# Fetch JSON data
def fetch_menu():
    try:
        response = requests.get(SCRIPT_URL)
        response.raise_for_status()  # Raise error if bad response
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"❌ Error fetching menu: {e}")
        return []

However, the script returns no data. Why?

Browser console output:
Access to fetch at 'https://docs.google.com/spreadsheets/...' from origin 'null' has been blocked
by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
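
A note on this error: CORS is enforced by browsers, and origin 'null' typically means the page was opened straight from the local filesystem; a plain Python script using requests is not subject to CORS at all. For comparison, here is a minimal server-side sketch that fetches and parses the published CSV (CSV_URL stands in for the link from step 4; this is an illustration, not necessarily how the post resolves the issue):

csv_sketch.py
import csv
import io

import requests

CSV_URL = "{PUBLISH_LINK}"  # the public CSV link from step 4

def fetch_menu_rows():
    response = requests.get(CSV_URL, timeout=10)
    response.raise_for_status()
    # The published endpoint returns CSV text, not JSON, so parse it as CSV
    return list(csv.reader(io.StringIO(response.text)))

for row in fetch_menu_rows():
    print(row)
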
MensaarLecker Development Log (1) -- Web Crawling

This blog post covers:

  • My personal experience developing a web crawler with Selenium
  • Explanations with examples from my repository: MensaarLecker

Motivation

My friends and I hate-love the UdS Mensa so much! The infinite frozen food and french fries menus give us so much energy and motivation for the 5-hour afternoon coding marathon. However, no one actually knows how many potatoes they have exterminated throughout the week. We have a genius webpage created by some Schnitzel lover. Personally, I like its minimalistic layout and its determination in Schnitzel searching.

However, we want more.

It’s not just Schnitzel; we want to know everything about their menu. We want to know what’s inside the Mensa Ladies’ brains when they design next week’s menu.

The desire never ends. We need more data, more details, more, More, MORE!

Development Process

Our goal here is simple:

  1. Scrape the Mensa menu every weekday and store it in Google Sheets (see the storage sketch after this list)

  2. Fetch the collected data from Google Sheets and update the website
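
The storage half of step 1 is not shown in this excerpt; as an illustration, the sketch below appends scraped rows to a sheet with the gspread library. The spreadsheet name, row layout, and service-account key file are assumptions, not the repository's actual setup:

sheets_sketch.py
import gspread

# Assumes a Google service-account key saved as service_account.json and the
# spreadsheet shared with that service account's email address
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("MensaarLecker").sheet1  # spreadsheet name is an assumption

def store_menu(date, counter, meal, price):
    # Append one scraped menu entry as a new row
    worksheet.append_row([date, counter, meal, price])

store_menu("2025-04-21", "Menu 1", "Schnitzel mit Pommes", "4,10 €")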

Web Scraping
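
As a taste of what this step involves, here is a minimal Selenium sketch in the spirit of the post; the page URL and CSS selector are illustrative placeholders, not the repository's actual ones:

scrape_sketch.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://mensaar.de/#/menu/sb")  # assumed menu page URL
    # Each dish name is assumed to sit in an element matching this selector
    for dish in driver.find_elements(By.CSS_SELECTOR, ".meal-title"):
        print(dish.text)
finally:
    driver.quit()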
