I was working at a company full of PhDs and well-seasoned veterans, who saw me as the new kid, somewhat underqualified to be working in their tools group. I had been at the firm for a while, and they were nice enough, but didn't really have me down as someone who was going to contribute as anything other than a very junior engineer.
We had a severe problem with a program's performance, and no one really had any idea why. And as it was clearly not a sophisticated project, I got assigned to figure something out.
I used the then very new callgrind and the accompanying flamegraph, and discovered that we were passing very large bit arrays for register allocation by value. Very, very large. They had started small enough to fit in registers, but over time had grown so large that a function call to manipulate them effectively flushed the cache, and the rest of the code assumed these operations were cheap.
Profiling tools at the time were quite primitive, and the application was a morass of shared libraries, weird dynamic allocations and JIT, and a bunch of other crap.
Valgrind was able to get the profiles after everything else I could try had failed.
The presentation I made on that discovery, and my proposed fixes (which eventually sped everything up greatly), finally earned the respect of my colleagues, and not having a PhD wasn't a big deal after that. Later on, those colleagues who had left the company invited me to my next gig. And the one after that.
I have a very similar experience, but with a different profiling tool. When I first graduated from school and joined a big internet company, I wasn't all that "different". The serving stack was all in C++. My colleagues were really capable but not that into "tools"; they'd rather depend on themselves (guess, tune, measure).
But I, as a fresh member in the team, learned and introduced Google perftools to the team and did a presentation of the breakdown of the running time of the big binary. I have to say that presentation was a life-changing moment in my career.
So together with you, I really want to thank those who devoted so much to building these tools. When I was doing the presentation, I really felt I was standing on the shoulders of giants, and those giants were helping me.
And over the years, I used more and more tools like valgrind, pahole, ASan, and TSan.
I've mentioned this before on HN as a way for a "newbie" to look like a superhero in a job very quickly; nice to hear a story of it actually working!
There is so much code in the world that nobody has even so much as glanced at a profile of, and any non-trivial, unprofiled code base is virtually guaranteed to have some kind of massive performance problem that is also almost trivial to fix like this.
Put this one in your toolbelt, folks. It's also so fast that you can easily try it without having to "schedule" it, and if I'm wrong and there aren't any easy profiling wins, hey, nobody has to know you even looked. Although in that case, you just learned something about the quality of the code base; if there aren't any profiling quick wins, that means someone else claimed them. As the codebase grows the probability of a quick win being available quickly goes to 1.
Always find it weird when people berate C++ tooling; Valgrind and adjacent friends are legitimately best in class and incredibly useful. Between RAII and a stack of robust static analyzers, you'd have to deliberately write unsafe code these days.
I love this story. I'm becoming an older dev now and I've often been blindsided by some insight or finding by juniors - it's really great to see & you've always got to make sure they get credit!
I’m surprised to see the attribution go to the tools and not your proposed fixes. Sure, the discovery was the first step in the order of operations, but can you elaborate on what enabled you to understand the problem and work out the resolution?
I have a similar experience with xdebug for a PHP shop I used to work at.
It feels very similar to being a nerd back at school, rescuing people's homework, and being rewarded with some respect.
I was introduced to valgrind by Andrew Tridgell during the main content of a vaguely famous lecture he gave, which finished with the audience collectively writing a shell-script bitkeeper client [1], demonstrating beyond doubt that Tridge had not in any way acted like a "git" when bitkeeper's licenseholder pulled the license for the linux kernel community.
Tridge said words to the effect of "if you program in C and you aren't using valgrind, you flipping should be!" and went on to talk about how some projects like to have a "valgrind clean" build, the same way they compile without warnings, and that it's a really useful thing. As ever, well expressed with examples from samba development.
He was obviously right and I started using valgrind right there in the lecture theatre. apt-get install is a beautiful thing.
He pronounced it "val-grind": "val" like the first part of "value", and "grind" as in grinding coffee beans. I haven't been able to change my pronunciation since then, regardless of it being "wrong".
Corbet's account of this in the lwn link above is actually wrong, as noted by akumria in the comments below it. Every single command and suggestion came from the audience, starting with telnetting to Ted Ts'o's bitkeeper IP & port that he made available for the demo. Typing "help" came from the audience, as did using netcat and the entire nc command. The audience wrote the bitkeeper client in 2 minutes, with tridge doing no more than encouraging, typing, and pointing out that the "tridge is a wizard reverse engineer who has used his powers for evil" narrative was clearly just some "wrong thinking." Linus claimed thereafter that Git was named after himself and not Tridge.
I learned of the tool from a native German speaker who pronounced it wall-grinned, which is apparently half-right. Like LaTeX, I can't keep the pronunciation straight from one sentence to the next.
I once submitted a bug fix for an obscure issue to valgrind. They asked for a test case, which I managed to provide, but I was a bit nervous as I couldn't immediately see how to fit it into their test suite.
The response from Julian Seward was so nice it set a permanently high bar for me when random people I don't know report bugs on my projects!
We still run our entire testsuite under valgrind in CI. Amazing tool!
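For anyone wanting to set that up, the useful detail is `--error-exitcode`, which makes Valgrind's own findings (and not just test failures) fail the CI job. A typical invocation — the binary name is a placeholder, the flags are real Valgrind options:

```
valgrind --error-exitcode=1 --leak-check=full --errors-for-leak-kinds=definite ./testsuite
```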
Valgrind is an amazingly useful tool. The biggest pain point, though, has always been reading through and processing the huge number of false positives that typically come from 3rd-party support libraries, such as GLib. GLib provides some suppression files to be used with Valgrind, but still, it has its own memory allocator, so things tend to go awry.
Running Helgrind or DRD (for threading issues) with GLib has been a bit frustrating, too. If anyone has some advice to share about this, I'm all ears!
(EDIT: I had mistakenly left out the phrase about suppression files)
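For what it's worth, two things that tend to help with the GLib case (both are documented GLib debugging knobs): running under `G_SLICE=always-malloc G_DEBUG=gc-friendly` makes GLib bypass its slab allocator so Memcheck sees ordinary malloc/free, and whatever remains can go into a project suppression file passed via `--suppressions=glib.supp`. An entry looks roughly like this (the name and frames are illustrative):

```
{
   glib_type_init_reachable
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:malloc
   ...
   fun:g_type_register_static
}
```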
Hah, I teach my students to use Valgrind, and I’ve been pronouncing it wrong this whole time. Guess I’ll have to make sure to get that right next semester :)
The magic of Valgrind really lies in its ability to detect errors without recompiling the code. Sure, there’s a performance hit, but sometimes all you have is a binary. It’s damn solid on Linux, and works even with the custom threading library we use for the course; shame the macOS port is barely maintained (last I checked, it only worked on OSes from a few years back - anything more recent will execute syscalls during process startup that Valgrind doesn’t handle).
There are times when LeakSanitizer (in gcc-8.2) would not give me the full backtrace of a leak, while valgrind would, so to me it's still an indispensable tool for debugging leaks. One caveat is that valgrind is orders of magnitude slower than LeakSanitizer. Now, if only I knew how to make valgrind run as fast as LeakSanitizer... (command line options?)
Happy birthday Valgrind. Next year you'll be able to drink in the US!
Being a UK PhD holder, a sentence that stood out to me was this commentary comparing UK and US PhDs:
"This was a three year UK PhD, rather than a brutal six-or-more year US PhD."
My cousin has a US PhD and, judging from what he tells me, it is a lot more rigorous than a UK PhD.
I was working on an application for Symbian mobile phones and I was able to implement large parts of it as a portable library - the bits which compressed results using a dictionary to make them tiny enough to fit into an SMS message or a UDP frame. This was before the days of flat-rate charges for internet access and we were trying to be very economical with data.
I was able to build and debug them on Linux with Valgrind finding many stupid mistakes and the library worked flawlessly on Symbian.
It's just one of the many times that Valgrind has saved my bacon. It's awesome.
Beyond raw technical ability, Nick and Julian were the kindest, most reasonable developers I've ever interacted with. I think a lot of Valgrind's success stems from the combination of sophisticated tech and the approachability of the core team.
I am old enough that I started with Purify, and I used Valgrind starting from version 1.0, because Purify was commercial and Solaris-only. It saved my behind multiple times.
Purify was an amazing tool. I recently noticed that one of my libraries (libffi) still has an --enable-purify configure option, although it probably hasn't been exercised in... 20 years? A Purify patent prevented work-alikes for many years, but valgrind eventually emerged as a more-than-worthy successor.
Fun fact: the creator of Purify went on to found Netflix and is still their CEO.
> Speaking of software quality, I think it’s fitting that I now work full time on Rust, a systems programming language that didn’t exist when Valgrind was created, but which basically prevents all the problems that Memcheck detects.
When we moved to Linux, Valgrind was THE tool that saved our as*s day after day after day. An issue in production? Rollback, valgrind, fix, push, repeat. Thank you for all the hard work; in fact, I don't think I can thank you enough.
Sanitizers and Electric Fence are ultra portable; they're definitely available on macOS. The feature set from valgrind is a bit richer, but not by much.
Valgrind is available on Mac. From the homepage: "It runs on the following platforms: (...) X86/Darwin and AMD64/Darwin (Mac OS X 10.12).". There's a notable omission of ARM64/Darwin in there, and I don't think it's an oversight.
What Mac is definitely lacking, though, is reverse debugging. Linux has rr, Windows has Time Travel Debugging. macOS still doesn't have an equivalent.
One problem with Valgrind is that the thing you're debugging should have been tested with Valgrind from the start; otherwise you're just going to be flooded with false triggers.
Now imagine that you're developing a new application and you want to use some library, and it hasn't been tested with valgrind and generates tons of false messages. Should you then use it? Or look for an alternative library?
I see the article mentions Solaris, an OS that I am very familiar with, which had me thinking about the memory corruption detection Solaris offered. Among the development features Solaris supported were two memory corruption checking libraries (libumem, watchmalloc) that could easily be used without having to recompile binaries to link against them. Libumem had support for detecting memory leaks, buffer overruns, multiple frees, use of uninitialized data, use of freed data, etc., but it could not detect a read past an allocated buffer, which is where watchmalloc came in handy. To use either with an executable binary was as easy as:
$ LD_PRELOAD=libumem.so.1 <executable filename>
I found a lot of memory corruption bugs using libumem in particular including some in MIT Kerberos that were severe enough to be considered security vulnerabilities. Sadly, Solaris is now in support mode thanks to Ellison and friends at Oracle.
Memcheck decreases the memory safety problem of C++ by about 80% in my experience - it really is a big deal. The compiler-based tools that require recompiling every library used are a bit impractical for large stacks such as the ones under Qt-based GUI applications. Several libraries, several build systems. But I hear that they are popular for CI systems in large projects such as web browsers, which probably have dedicated CI developers. There are also some IME rare problems that these tools can find that Memcheck can't, which is due to information unavailable in compiled code. Still, Memcheck has the largest coverage by far.
Callgrind and Cachegrind give very precise, repeatable results, complementary to but not replacing perf and AMD / Intel tooling which use hardware performance counters. I tend to use all of them. They all work without recompiling.
Not sure if it was still doing it in 2001, but in the 1997-1998 time-frame Purify also ran on HP-UX. The company I was working for at the time used it and we ended up finding a two-byte (IIRC) leak in the HP gethostbyname() library call (well, at least I think it was gethostbyname, it's more than two decades ago).
That was one of the more annoying tickets to file. We could of course send them the binary, but it would not run without the Purify license file, and we weren't comfortable sending off the license file as well. But, in the end, they accepted the bug. Not sure if there was ever any fix, though.
First of all, congratulations to Valgrind and the team behind it! This is an essential tool that has helped me personally over the years while developing.
What needs to be done to get Valgrind binaries available for macOS (M1)? From a company perspective, we are happy to support this work. If you know who's interested and can accomplish this, please drop me an email at eduardo at calyptia dot com.
I still use Valgrind memcheck for memory leak verification of a large piece of code I have developed, with a long end-to-end test.
Also, it has a nice integration with Eclipse, which reflects the Valgrind memcheck output onto the source files directly, enabling you to see where problems are rooted.
> I still use Cachegrind, Callgrind, and DHAT all the time. I’m amazed that I’m still using Cachegrind today, given that it has hardly changed in twenty years. (I only use it for instruction counts, though. I wouldn’t trust the icache/dcache results at all given that they come from a best-guess simulation of an AMD Athlon circa 2002.)
I'm pretty sure I've seen people using the icache/dcache miss counts from valgrind for profiling. I wonder how unreliable these numbers are.
> Cachegrind is used to measure performance because it gives answers that are repeatable to 7 or more significant digits. In comparison, actual (wall-clock) run times are scarcely repeatable beyond one significant digit [...] The high repeatability of cachegrind allows the SQLite developers to implement and measure "microoptimizations".
There's a bunch of ways for caches to behave differently but have they changed much over the past 20 years? i.e. is the difference between [2022 AMD cache, 2002 AMD cache] significantly greater than the difference between [2002 PowerPC G4 cache, 2002 AMD cache, 2002 Intel cache] ?
In my obviously biased opinion, it's very specialised, but sometimes exactly what you need (I have used this in anger maybe 2-3 times in my career since then, which is why I wrote the C version):
All leakdice does is: You pick a running process which you own, leakdice picks a random heap page belonging to that process and shows you that page as hex + ASCII.
The Raymond Chen article explains why you might ever want to do this.
For the Clang static analyzer, make sure your LLVM toolchain has the Z3 support enabled (OK in Debian stable for example), and enable cross translation units (CTU) analysis too for better results.
I work full-time with Rust and use Valgrind all the time to see how much memory is being allocated on the heap, make a change, and then see if there's a difference - and also for cache misses:
valgrind target/debug/rustbinary
==10173== HEAP SUMMARY:
==10173== in use at exit: 854,740 bytes in 175 blocks
Not used it with Rust, but have used it with OCaml, Perl, Ruby, Tcl successfully. In managed languages it's mainly useful for detecting problems in C bindings rather than the language itself. Languages where it doesn't work well: Python and Golang.
compiler-guy|3 years ago
So thanks!
azurezyq|3 years ago
Much appreciated!
jerf|3 years ago
intelVISA|3 years ago
LAC-Tech|3 years ago
dijonman2|3 years ago
There has to be a deeper understanding I think
nullify88|3 years ago
cjbprime|3 years ago
(Kidding. Thanks for Valgrind! I still use it for assessing memory corruption vulnerabilities along with ASan.)
galangalalgol|3 years ago
stormbrew|3 years ago
harry8|3 years ago
[1] https://lwn.net/Articles/132938/
dtgriscom|3 years ago
glandium|3 years ago
hgs3|3 years ago
klyrs|3 years ago
opan|3 years ago
edit: val as in value + grinned
koolba|3 years ago
dietr1ch|3 years ago
RustyRussell|3 years ago
sealeck|3 years ago
nicoburns|3 years ago
j1elo|3 years ago
nneonneo|3 years ago
syockit|3 years ago
rigtorp|3 years ago
Olumde|3 years ago
wenc|3 years ago
The US PhD is usually 4-5 years after a 4-year bachelors (8-9 years total). It is a little bit longer, with more graduate-level coursework.
That said, the US bachelors starts at age 17 while a UK bachelors starts after 2 years of A-levels. So in terms of length it’s a wash.
not2b|3 years ago
t43562|3 years ago
gkhartman|3 years ago
tarasglek|3 years ago
junon|3 years ago
Lovely piece of software toward which I owe a lot of gratitude.
mynegation|3 years ago
atgreen|3 years ago
cpeterso|3 years ago
https://en.m.wikipedia.org/wiki/BoundsChecker
hn_go_brrrrr|3 years ago
pjmlp|3 years ago
pjmlp|3 years ago
Just like Ada has been doing since 1983.
oconnor663|3 years ago
lma21|3 years ago
whimsicalism|3 years ago
And running in a container is not really a solution for most of these.
wyldfire|3 years ago
glandium|3 years ago
amelius|3 years ago
willfiveash|3 years ago
ahartmetz|3 years ago
randomswede|3 years ago
mukundesh|3 years ago
Also used by SQLite in their performance measurement workflow (https://sqlite.org/cpu.html#performance_measurement)
edsiper2|3 years ago
bayindirh|3 years ago
All in all, Valgrind is a great toolset.
P.S.: I was pronouncing Valgrind correctly! :)
vlmutolo|3 years ago
andrewf|3 years ago
appleflaxen|3 years ago
tialaramex|3 years ago
https://github.com/tialaramex/leakdice (or https://github.com/tialaramex/leakdice-rust)
Leakdice implements some of Raymond Chen's "The poor man’s way of identifying memory leaks" for you. On Linux at least.
https://bytepointer.com/resources/old_new_thing/20050815_224...
tux3|3 years ago
I'm also a fan of systemtap, for when your probing problems push into peeking at the kernel
yaantc|3 years ago
Also, the sanitizers for GCC and Clang (https://github.com/google/sanitizers), and the Clang static analyzer (and tidy too) through CodeChecker (https://codechecker.readthedocs.io/).
cjbprime|3 years ago
amelius|3 years ago
It seems some packages (even basic ones) are not compatible with Valgrind, thereby spoiling the entire debugging experience.
Sesse__|3 years ago
anewpersonality|3 years ago
jackosdev|3 years ago
valgrind target/debug/rustbinary
==10173== HEAP SUMMARY:
==10173== in use at exit: 854,740 bytes in 175 blocks
==10173== total heap usage: 2,046 allocs, 1,871 frees, 3,072,309 bytes allocated
==10173==
==10173== LEAK SUMMARY:
==10173== definitely lost: 0 bytes in 0 blocks
==10173== indirectly lost: 0 bytes in 0 blocks
==10173== possibly lost: 1,175 bytes in 21 blocks
==10173== still reachable: 853,565 bytes in 154 blocks
==10173== suppressed: 0 bytes in 0 blocks
==10173== Rerun with --leak-check=full to see details of leaked memory
valgrind --tool=cachegrind target/debug/rustbinary
==146711==
==146711== I refs: 1,054,791,445
==146711== I1 misses: 11,038,023
==146711== LLi misses: 62,896
==146711== I1 miss rate: 1.05%
==146711== LLi miss rate: 0.01%
==146711==
==146711== D refs: 793,113,817 (368,907,959 rd + 424,205,858 wr)
==146711== D1 misses: 757,883 ( 535,230 rd + 222,653 wr)
==146711== LLd misses: 119,285 ( 49,251 rd + 70,034 wr)
==146711== D1 miss rate: 0.1% ( 0.1% + 0.1% )
==146711== LLd miss rate: 0.0% ( 0.0% + 0.0% )
==146711==
==146711== LL refs: 11,795,906 ( 11,573,253 rd + 222,653 wr)
==146711== LL misses: 182,181 ( 112,147 rd + 70,034 wr)
==146711== LL miss rate: 0.0% ( 0.0% + 0.0% )
rwmj|3 years ago
pjmlp|3 years ago
ssrs|3 years ago