#16 - Alex Fink: Cutting Through the Noise of Online Politics

Alex Fink, the CEO of OtherWeb, strives to make news consumption more nutritious by removing junk such as clickbait and misinformation from the content people consume, but he faces a relentless arms race against disinformation.

In this episode, you will be able to:

1. Discover how AI-driven tools are eliminating junk news and clickbait content to improve online experiences.
2. Learn about the significance of nutrition labels in evaluating the quality and informativeness of online articles.
3. Gain insights into tackling the ever-growing problem of disinformation and misinformation on social media platforms.
4. Understand the implications of China's superpower ambitions and the US's perspective on these developments.
5. Explore OtherWeb's customizable news feed and its potential for shaping the future of online news consumption.



This podcast uses the following third-party services for analysis:

Chartable - https://chartable.com/privacy
Podcorn - https://podcorn.com/privacy

00:00:00 Welcome back to the Purple Political Podcast. I'm your host, Radell Lewis, and we are back to discuss some of the most interesting conversations and find solutions on these interesting topics. Today we'll be doing episode number 16, and we're going to be talking about how the Internet is influencing people with junk, question mark, what that means, and whether it is damaging to society. Before we dive into the meat and potatoes of the conversation, I'm going to read off a review. Make sure you leave one so you can be shouted out on each and every episode. 00:00:32 The review says: "There is a need for a platform to present voices from not just both extreme sides, but actually from the middle. I like this approach and enjoy the guests' deep understanding of their subjects. Not every topic is for me, but there is clearly a lot of effort put into this podcast and presenting relevant opinions and information with a more middle-ground approach." So I appreciate that, and that's what I try to do here in every conversation that I have. 00:01:01 Also, I want to go over some current trending news to keep everybody up to date on what's going on in society. This trending news is related to China. Our best friends, right? No, they're not our best friends. 00:01:14 So China is kind of peeved at us, as we have another politician talking to the Taiwanese president. This time it was the Speaker, Kevin McCarthy, as he plans to meet with the Taiwanese president. It doesn't help that, for Taiwan, Honduras recently cut ties with it, no longer recognizing it as a country independent from China. China really wants Taiwan, and China is really reinforcing this idea as it deepens relations with Russia, meeting with Putin and signing a lot of documents. 00:01:46 The expectation for them seems to be establishing their own superpower status and potentially their own currency.
They want to walk away from the United States and really punch us in the face. 00:01:58 They really want to one-up us. So it's not really looking good. Russia wants to take Ukraine; China wants to take Taiwan. What will America do? We shall see. 00:02:08 So that is the current trending news. I'm going to wait for our guest. Let me see if he's here. So in today's episode, I have my guest here, Alex Fink, and I'm going to let him introduce himself and tell the people what he's about. Hi everyone. 00:02:26 My name is Alex Fink. I'm the founder and CEO of a company called OtherWeb. And what we do is we try to clean the junk out of news and other types of information that people consume, essentially to help people consume higher-quality information. All right, so I guess the first thing to really go into is: what do you mean by junk? So there are different kinds of information that are essentially not good for you or lack any kind of nutritive value. 00:02:59 We start from the low-hanging fruit. So it's those times where you open Google News and it shows you an article from CNN with a title like "Stop What You're Doing and Watch This Elephant Play with Bubbles." That is obviously not news, but CNN tends to post stuff like this, and so do most other serious publishers these days. So that's one really obvious type of junk. It's just not news.
It shouldn't be there. They just post it to get clicks. Then there's clickbait, which is when the article does have some substance, but the headline doesn't match the content. Right. It's just there to attract attention. 00:03:37 That's another type; maybe it's less junky, but it's still junk. And then you sort of go up the junk totem pole and you get to things like real misinformation or disinformation, which is harder to detect. We still have some filters that detect known propaganda techniques, but obviously that's an arms race. We can never catch everything. So we do our best to start from the things that are really obvious and work our way up. 00:04:06 Okay. Well, I mean, it seems pretty obvious people don't like that stuff, the random articles trying to make you click on them and the clickbait, which is very interesting considering how much clickbait affects people on a consistent basis. I'm curious, and I don't know if you'll be able to explain it in layman's terms, but how do you really accomplish something like that? And in addition, how do you accomplish it when a lot of people rely on those things for monetization? How do you monetize yourself as well, or keep some type of monetization, while doing all that and keeping ads off your website? So I think it's actually worth touching upon ads, because I think they are what caused the proliferation of things like clickbait in the first place. Right? Because most ads pay per click or per view, and most content is monetized using ads, it creates a single selective pressure on all content to maximize clicks and views. 00:05:26 There's no pay-per-quality or pay-per-truth or anything like that. And so over time, over the past 20 years since pay-per-click was introduced with Google's AdWords, we've seen all content drift in that direction. And I'm pretty sure the CNN of 20 years ago wouldn't post an article about an elephant blowing bubbles, but today they do because it generates clicks, right? Today Forbes has articles rating protein powders. They have a top-ten list of the best protein powders on the market. 00:05:56 I'm pretty sure the Forbes of 20 years ago didn't post articles like that. Right. They're essentially piggybacking on their authority on Google to drive clicks towards anything that generates them. So that is what caused the problem.
And to undo that, I think we need to create an incentive for the entire ecosystem to maximize quality instead of just maximizing clicks and views. 00:06:19 Now, taking a step back, how do we define something like this? We've essentially tried our best to define small, easy-to-understand factors that are negative traits an article might have. So clickbait is one of them. It's pretty well defined. Clickbait is when the headline doesn't match the body of the article, and it is written in language that is there to attract attention. 00:06:43 If you give an article to three people and you ask them, "Is this clickbait or not?", usually all three will agree. You have some extreme examples that are extreme right or extreme left, and then there might be some disagreements. But for the most part, people tend to agree on what is clickbait. So what we did as an AI company is we trained a model that essentially emulates what the humans did. 00:07:09 You ask humans to annotate 10,000 articles, then you start feeding that annotated data set to an AI model, teaching it just as you would teach a child walking down the street: dog, cat, dog, cat. After a while, the child can tell whether the animal it's looking at is a dog or a cat. So we train AI in the same supervised way. 00:07:31 It's pretty different from what GPT-4 does, for example, which is a large unsupervised model that ingests the entire web, right? We have very well-defined, small tasks that we teach our AI to do. We have about 20 of those models, each one trying to detect something small and well defined. And then we also have our own ways to estimate an aggregate score by essentially adding up the 20 results of these models. But once you do something like that, then it really becomes context dependent.
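The supervised approach Fink describes, training one small model per well-defined trait on human-annotated examples, can be sketched roughly as follows. The dataset, headline examples, model choice, and library are illustrative assumptions, not OtherWeb's actual stack:

```python
# Hypothetical sketch of one small, single-purpose classifier trained on
# human-annotated examples (here: "is this headline clickbait?").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated dataset: (headline, is_clickbait) pairs a human labeled.
examples = [
    ("Stop what you're doing and watch this elephant play with bubbles", 1),
    ("You won't believe what happened next", 1),
    ("This one weird trick doctors don't want you to know", 1),
    ("Number 7 will shock you", 1),
    ("Senate passes annual defense spending bill", 0),
    ("Federal Reserve raises interest rates by a quarter point", 0),
    ("Honduras cuts diplomatic ties with Taiwan", 0),
    ("Quarterly earnings beat analyst expectations", 0),
]
texts, labels = zip(*examples)

# One small, well-defined task, trained in a supervised way on labeled data.
clickbait_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clickbait_model.fit(texts, labels)

# Score a new headline: probability that it is clickbait.
score = clickbait_model.predict_proba(["You won't believe this one weird trick"])[0][1]
print(round(score, 2))
```

In a real system this would be repeated for each of the roughly 20 traits (clickbait, subjectivity, propaganda techniques, and so on), with far larger annotated datasets and stronger models, but the supervised shape is the same.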
00:08:04 So if you select the weights for these 20 models and how you calculate the aggregate score for news articles, then that same formula does not work anymore for commentary or for opinion articles, because the factors are different. Our news scores tend to really punish opinionatedness, because opinion shouldn't be in the news. So that's kind of our approach to things. It's not like we can claim that it's scientifically valid, but the way we've tested it, people tend to agree with it, right? Our entire position is that we want people, after they read an entire article in detail, to tell us whether they intuitively agree with our scoring or not. 00:08:48 And if they do, that means they can now trust our scoring before they start reading the article. It's like the nutrition label on food packaging: you don't want to have to eat the food to figure out if it's good for you; you want to look at the label before you eat it. We want people to have the exact same experience when they look at an article. So we actually have a nutrition label next to each article with all the outputs of our models, so they can look at it and figure out whether they want to read it or not. 00:09:20 All right. Honestly, it sounds very interesting, and I can definitely see how it would work. But as you said earlier, the tougher part moving forward is misinformation and disinformation. So before we dive into that, can you give a definition for both misinformation and disinformation? So, misinformation is actually a pretty loosely defined term, so let me start with the easier one. 00:09:45 Disinformation is when somebody intentionally tries to convince people of something that isn't true. Foreign propaganda is a good example of disinformation. Right. Somebody intentionally tries to disinform.
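The context-dependent aggregation described above, where the same per-model outputs get combined with different weights for news versus opinion, can be sketched like this. The trait names, weights, and 0-to-1 scales are invented for illustration:

```python
# Illustrative sketch of a "nutrition label" aggregate: several small models
# each emit a 0-1 quality score, and a context-dependent weight vector
# combines them into one number.

def aggregate_score(model_outputs: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-model scores, normalized back to 0-1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * model_outputs.get(name, 0.0) for name in weights) / total_weight

# Each small model detects one well-defined trait (higher = better quality).
outputs = {
    "not_clickbait": 0.9,
    "informative": 0.8,
    "low_subjectivity": 0.3,   # a heavily opinionated article
    "source_cited": 0.7,
}

# News scoring punishes opinionatedness hard; opinion scoring barely does.
news_weights    = {"not_clickbait": 3, "informative": 2, "low_subjectivity": 4, "source_cited": 2}
opinion_weights = {"not_clickbait": 3, "informative": 2, "low_subjectivity": 0.5, "source_cited": 2}

print(round(aggregate_score(outputs, news_weights), 2))     # -> 0.63
print(round(aggregate_score(outputs, opinion_weights), 2))  # -> 0.78
```

The same article scores noticeably lower under the news formula than under the opinion formula, which matches the point that one weighting cannot serve both content types.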
Misinformation is a term that only appeared pretty recently, and it essentially applies to something that is obviously not true but is reported anyway. 00:10:08 But very often people just use that term to describe the other side of an argument they disagree with. So that's why I'm kind of cautious about looking at it this way, and I tend to just speak about junk. Whether you agree with it or not, if the argument is laid out correctly, I consider it a valid argument for the most part. There is a third term that we see a lot these days, and it's fake news. 00:10:34 And this one is also loosely defined, I would say. If you define it correctly, it should be the same as disinformation. But people use it much more broadly than that and try to paint anything they disagree with as fake news. Whereas in reality, many of these things are just erroneous because breaking news tends to be wrong a lot. Right. 00:10:56 And this is something people also don't realize: news has a time value. Right? If an event is unfolding, then a week later you might write something accurate about it. An hour later, what you're writing is at best 60 or 70 percent accurate. 00:11:12 So that's why you have to be really careful with these terms, and we really prefer to look at things in terms of nutritive value: junk, not junk. Just as with food, you might disagree whether keto is better, plant-based is better, or Mediterranean is better. 00:11:29 We all agree that just eating bowls of sugar is not good, right? Correct. So let's start by filtering that stuff out. Okay. I can definitely see where you're coming from in that situation, and it would be very hard once you get into the weeds, especially when you have very different political opinions.
00:11:48 Like, recently I was going through a huge, long-winded Twitter thread, and these people were attacking how social media handled COVID and giving their own opinions on misinformation, disinformation, and malinformation, and saying whether or not we should take this content down because it could be harmful to the public and all that stuff. Those situations are so hard to define, because you're going to have people, like, adamantly going against it unless you are spouting utter facts that cannot be argued, which is hard for some people. Right. So in your situation, for misinformation and disinformation, do you take into account the political ideology in the information being presented, or do you look plainly at the facts being presented? 00:12:45 We are actually looking plainly at the language being used in the article as it's written. So we ignore the source for the most part. I mean, there is source-based filtering in that if a source is just consistently bad, we won't even sample it. Right? We won't crawl that website. 00:13:00 So at that level there is source-based filtering, but for everything that makes it onto the platform, we don't actually use the name of the source in the scoring at all, because we think the article should speak for itself. So Fox News has some bad articles and some good articles. CNN has some good articles and some bad articles. We're not going to score one of those higher than the other just because of the domain we picked the article from. 00:13:22 We're going to look at the actual article. After we do, I can tell you how each of those sources scores on average, right, but the score itself was not influenced by the domain name. That's one element. Now, the other thing you mentioned is political ideology. And we've actually developed a pretty accurate model that looks at the text of an article and classifies who wrote it, whether it's right wing or left wing.
00:13:46 And then we decided not to deploy it and not to show the result to anyone, because our experience is that even if you just show people this classification, they will start arguing and getting upset: "You just called this center-right, but it's extreme right, how dare you?" Or, "You called this extreme right, but it's centrist, how dare you?" Right. So we decided no good will come out of it. 00:14:11 These labels, right and left, just anger people, and they serve no informative value at this point. So we ignore it completely. We try to sample from sources that are considered right wing and sources that are considered left wing, but we don't classify politically, and we don't use that information in our evaluation. All right, that's very interesting. I can definitely see how easily people would get triggered just by the classification. 00:14:39 Definitely could see that. So I want to take a step back and kind of go over the problem of junk, as you call it. How do you believe this problem started, and when do you think it became a really big problem for society? So this problem has been coming and going in waves, in reality. The first time you see it appearing is after the invention of the printing press in the 1430s, right? 00:15:08 And then there's a bunch of junk books being printed, and some of those books caused really bad things to happen. There's one book, The Hammer of Witches (the Malleus Maleficarum), that launched the witch hunts in the Middle Ages, and probably 80,000 women were burnt at the stake because of that single book, right? And so you have this period of religious wars, inquisitions, and witch hunts, and then you have the invention of the scientific method, peer review, the Enlightenment, and after a transition period of about 200 years, everything is good for a while. And then you have the invention of the daily newspaper. That is basically when clickbait appeared for the first time.
00:15:48 If you look at newspapers before the 1890s, it seems like they were mostly partisan advocacy, I guess, but it wasn't clickbait. They weren't trying to attract attention; they were mostly appealing to subscribers. But then from the end of the 19th century you suddenly see something that looks a lot like clickbait, except it was a kid on a corner yelling, "Extra, extra, read all about it!" Right? 00:16:12 Then that lasts for a while. And then you have a good period with subscription-based newspapers from the 1930s onward until roughly the late '90s. And in the late '90s it breaks again, and now, with the invention of the Internet, we probably have the worst clickbait period of our lifetimes, or in history. Because now not only does every newspaper fight against every other newspaper for attention, but every single article stands for itself and fights against every other article. 00:16:44 So whereas the daily newspaper of the 1890s or the beginning of the 20th century needed to have one clickbait headline while the rest was journalism, today there is no "the rest." Every article has to be something that attracts attention in its own right. And so the pressure to produce this kind of junk that just attracts eyeballs is higher than it's ever been. And our ways of tracking the performance of an article are better than they've ever been, which means people get to calibrate their junk creation really efficiently. 00:17:22 Right? The worst thing is that it's not like the bad guys are doing it and the good guys are not. Right? Once the bad guys are doing it, the good guys have to compete with the bad guys. Right? 00:17:33 And so once BuzzFeed started publishing articles with a single anonymous source, that's it. Nobody has the time to double-source anymore. And so now you see The New York Times publishing articles with a single anonymous source. Right? Because if they were to double-source, by the time they broke the story, nobody would read it.
00:17:52 Everybody will already know it. So all the journalistic standards that people learn in journalism school are essentially out the window right now. Nobody has time for them. That's very interesting. And in large part, in current society, everything is very much intertwined with how capitalism works. 00:18:12 You give people what they want. And now, with the introduction of the Internet, the concept of virality is so essential for people to get their news stories out. As of right now, a lot of people tend to get their information from social media, from Twitter, from Facebook. Does your platform, your AI, or any strategy you have implemented address the information people try to get from social media, the news and the facts they may try to get from there? Do you have anything that tackles that potential problem? 00:19:02 So I wouldn't say we're attacking it. We're just creating an alternative, a walled garden, you could say, that doesn't have these kinds of problems. Fair enough. So we're starting from aggregating news, but we're also aggregating lots of different types of information. We have commentary, podcasts, research studies, Wikipedia, online courses, everything in one place. And the idea is that most people start by just reading their daily news feed. 00:19:26 And unlike the social media platforms, we're not trying to maximize your time on site. In fact, we ask you when you sign up how many articles you typically read per day. And if you tell us 50, then once you get to 50 in a given day, we're going to show you a "Congratulations, you've read enough news for today" kind of screen, so you at least have an opportunity to think about whether or not you want to continue. If you do, we're not going to stop you.
But we don't have the goal of keeping you there for an hour and a half per day like TikTok does, because we think we should deliver as much value in as little time as possible, as opposed to parking you there for as long as possible. 00:20:06 It's a different goal. Now, another thought we have, and it's still a hypothesis, I don't know what percentage of the population thinks this way, but I know I certainly do: if I read an article and I see a term I want to know more about, I want to know more about it right now. Opening another window, typing something into the address bar, and selecting a result from Google seems like a really odd process. A much more natural process is to highlight the text that interests me and dive deeper into that. And so we've implemented that, and that's why all these other sources are in the same place. 00:20:41 So if you highlight a term that's more of a scientific concept, maybe you want to search research studies. If you highlighted the name of a person you don't know, maybe you want to search Wikipedia. We allow you to navigate information in a two-dimensional grid without leaving the platform. And again, the hope is you get as much value as possible in as little time as possible. And you mentioned monetization before; we haven't started monetizing yet. 00:21:08 For now, it's free and ad-free. Obviously we can't keep going like this forever. But the idea we have right now, the first type of monetization we're going to introduce, and hopefully it will be enough, is to place ads on the search results when you use this dive-deeper approach. Because we see that the average user we have right now does this between two and three times a day: they read something, they highlight text, and they dive deeper. So if that's a moment where you get search results, it means you're about to click something anyway, so we might as well show you an ad that is also relevant to your search.
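The highlight-and-dive-deeper idea, routing a highlighted term to a likely search vertical such as research studies or Wikipedia, could be sketched with simple heuristics like these. The rules and vertical names are assumptions for illustration, not OtherWeb's real routing logic:

```python
# Hypothetical sketch: guess which in-platform search vertical fits a
# highlighted span of text, so the user can dive deeper without leaving.

def route_highlight(term: str) -> str:
    """Pick a search vertical for a highlighted span of text."""
    words = term.split()
    # Capitalized multi-word spans tend to be names of people or places,
    # so send them to the Wikipedia vertical.
    if len(words) >= 2 and all(w[0].isupper() for w in words):
        return "wikipedia"
    # Long, lowercase single terms look like technical concepts, so send
    # them to the research-studies vertical.
    if len(words) == 1 and term.islower() and len(term) > 8:
        return "research"
    # Everything else falls back to a plain news search.
    return "news"

print(route_highlight("Kevin McCarthy"))   # wikipedia
print(route_highlight("disinformation"))   # research
print(route_highlight("bank run"))         # news
```

A production version would presumably use trained entity and topic classifiers rather than string heuristics, but the routing shape, highlighted text in, vertical out, is the same.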
00:21:47 People's feeds, though, I am very much against ever placing ads on. Hopefully I can hold the line for as long as I can. But my thinking is, if you're reading a feed, you don't want to click, you want to read. So why would we place an ad there? That's just disrespectful. 00:22:04 I definitely get that. So I want to do an exercise that I do for every kind of topic that has a particular solution or something being advocated for. Your program seems very progressive in its nature of trying to get out information that is rid of all the junk, right? Obviously, for someone who likes to do research and who hates clickbait, false information, and people just putting their opinions on everything, that would be very useful. But the exercise here is basically: can you lay out a positive, obviously your main positive for this platform moving forward, and a potential negative that it could have for people and society moving forward, if they have this platform that's alleviating all this junk? 00:23:04 Can you give a positive and a negative? Okay, so first of all, let me talk about the positive externality that we are aiming for, because the idea is we don't just want to benefit the people that use the platform. Obviously the platform needs to benefit the users, right? But my belief and my hope is that if we show the world that people value quality, then all the other platforms will also pursue this goal, because it pays. They will see that it pays. 00:23:34 So you can think of it this way: Whole Foods introduced local, organic, pasture-raised, all of that stuff, to the world. Now Walmart carries pasture-raised eggs and organic meat, right? Why? Because somebody showed that it pays. So that is the externality I'm after.
00:23:50 The other externality I'm after is that if we filter out a substantial amount of clickbait and we grow large enough to actually affect the incentives for content creators, maybe people will start creating less clickbait, because it won't pay if more and more people filter it out. So that is the real externality we want to have: we want to affect not just the consumption side but also the distribution and creation sides. In terms of negative externalities, obviously I'm not trying to create any, so it's kind of an interesting exercise just to think of one. 00:24:28 I guess the worst thing that can happen is that, like most companies, we will start with one mission and then that mission will get corrupted as soon as there are investors or shareholders or other kinds of people involved, because that creates a conflict of interest, right? Google started out wanting to organize the world's information; the moment that information gets monetized with advertising, they suddenly have a trade-off between showing people the best information or the information that is most likely to get them to click on ads. It's a hard trade-off to square. Right? 00:25:08 So we're trying to address that. We registered ourselves as a public benefit corporation specifically so that shareholder value is not the only thing that guides us, right? We put improving the quality of the information people consume on par with it. And we are also opening our models so that they're all source-available, to make sure that even if one day I'm not the CEO, and whoever replaces me decides, "Why don't we hide some bias in these models to try to sell influence to the highest bidder?", I want to make sure that's impossible, because our models are open, right? 00:25:47 So that's our way of trying to preclude the negative externalities. They might still happen. The moment people start trusting us to help them evaluate or select information, we might mess up, right?
And so we're trying our best not to do that, but maybe we will influence what people consume in the wrong way. Maybe our idea about quality isn't correct. 00:26:11 Maybe all these hypotheses I was telling you about, about what junk is, maybe they're wrong. I don't know. I hope not, right? Yeah. I'm very much with you. 00:26:21 A lot of the time, 00:26:25 overall, a lot of these new inventions and new ideas end up a net positive for society as a whole. But then something unexpected happens and creates a new problem for society, and then we just have to adapt to that situation. So it's always important, in my opinion, to consider the possible contingencies that may happen. Right? 00:26:49 So, to kind of go over your site for people who want to go on there, 00:26:57 let me say I have three types of examples. One is a student who wants to do research for some type of paper. It doesn't really matter what; let's say a political paper. Another is just the casual person who wants to figure out what's going on with the news and with politics. 00:27:16 And the third is someone who is a content creator. Nowadays there are a lot of people who create content as individuals and do their own research, and they want to be as correct as possible when they tell other people. So how do these people go on your site, use the OtherWeb website, and use the AI technology? How do they use all this stuff? So let me start with the second type of person, because that's sort of the normal target audience, right?
We envision that the average user just wants to get their daily dose of the news, and occasionally they see something interesting and dive deeper into it, which probably happens two or three times per day, based on what we're seeing in the data. Right? So there are several different ways you can consume your daily dose. You can do it in the Android and iOS apps. We actually rolled out a new design last week that makes them, I think, much more pleasant. 00:28:13 In fact, my personal opinion is that our apps right now are much better than the website, and we hope the website catches up soon. But right now they just have completely different designs. You can do it on the website itself, otherweb.com, or we actually have a daily newsletter as well. It's experimental for now, but we will have a link to it from the main website very soon. I might sign up, not going to lie. Yeah. There we select the ten best or most important articles of the day, which were already pre-filtered by the engine but then selected by a human to cover all the major things: a little bit of tech, a little bit of politics, so you get a balanced diet. So that would be the way a normal person, I think, would consume it. 00:28:56 I should mention there's also a commentary section, so if you do like opinion articles, they're there. I personally almost never read them, but I know that a lot of people do. Now, as for the student, I am not sure we are the best tool for the student to do research with. We have the ability to search a bunch of different information sources in one place, if that's something that benefits the topic the student is researching. So it helps the student to have Wikipedia and all the unpaywalled research studies. 00:29:29 We have DOAJ, OSF, and Unpaywall as our three sources for research studies, plus online courses and books, all in one place. Great. Then they can just come and use our website as a search engine. Actually, we have our own search engine too. It works kind of like Bing but with extra filters, where you can disable certain types of results you don't want to see. 00:29:51 I don't know if that's the best use for them, but it's certainly a viable use case. There are other resources that are more targeted towards students. And then the third type that you mentioned, remind me what it was?
The content creators, or just individual entrepreneurs who want to do their own personal research so they can eventually spread that information and their own opinions to other people. 00:30:20 So I would say in that case, it's not very different from the first type of person. In our case, the benefit for the person you're describing is that we're trying to make everything very customizable, and the user controls exactly what they see in their feed. So unlike with other news sources, you can come in and say, "I want way more business than anything else, I want way more tech, and I want to disable entertainment and sports completely from my feed," for example, because that's what an entrepreneur would often do. Moreover, now that we are rolling out more granular customization per topic, we have internally identified a little over 200 topics, and we're trying over time to figure out your preference for each one. But unlike on other platforms, once we learn your preferences, you can actually go in, see what preferences we have inferred about you, and change them manually if you want to. 00:31:14 So you might see that for some reason we decided you don't like electric cars, but you know that you do. You can just move a slider and say, "Show me more electric cars." Right? And then maybe an entrepreneur who is into green energy will just crank that knob up. 00:31:31 And if that entrepreneur doesn't care about biological research because that's not their field, they will turn that down, and ultimately they will get a feed that is customized to what they want to know the latest news about, eliminating everything that is not useful to them in their daily life. All right, excellent, excellent. So it definitely sounds useful. Another thing I want to ask about: you mentioned the different categories of information you can find there, sports, entertainment, politics, science, all that stuff.
Does it also cover a lot more niche things, or will it potentially cover a lot more niche things?

00:32:13 For example, people like video games, people like anime, people like comics, people like horror stories specifically, or crime stories, right? So a lot more things that are niche, that can also have clickbait problems, junk problems, all that stuff. So does your platform currently have that, or will it potentially have it in the future? So currently I don't think we have it, unless you're talking about an article about that topic; then we might have it, but we don't have any of what you described directly. We could potentially include some of these sources as well.

00:32:55 But to filter out the junk in a source like this, we have to really understand it, and really understand what junk is in that particular type of content. So it's a bit like localization in some sense. It's like adding a new language, and so we plan to add many languages, but for now it's English only. Likewise, we plan to add many different types of content, but for now it's pretty news- and commentary-heavy. So it's going to take us time.

00:33:23 We're still small and, I would say, almost bootstrapped. I was the initial investor and founder. There is one external investor, and now we're raising money on Wefunder. So if you go to wefunder.com/otherweb, you can invest in the company as well.

00:33:39 And we've raised a little over 200K already that way. But that still means we have very limited resources to start adding experimental types of content. As time goes by, I hope to add the entire Internet and filter the junk out of all of it, because God knows we need it, right? So my final question before we start wrapping things up goes into a different type of source of information. We talked a lot about articles, of course, and I know you said you're at a certain place right now, but I might as well ask whether you plan to have this as well in the future.
00:34:21 So what do you think about videos in terms of clickbait, misinformation, and disinformation? Do you potentially plan to tackle those, in terms of filtering them out for people to watch and gather information, or are you going to stick mostly to more written material? We absolutely want to go to audio and to video. The problem for us right now is that it's just expensive. The amount of data that you have to go through in a video is much larger, for essentially the same amount of information, than it is for text.

00:35:00 And so our budgets right now are just too low to do that. But technically it's not that much harder. So the moment we raise a larger investment round, podcasts and YouTube videos and things like that are immediately coming on the platform. Again, you do have to retrain the models a little bit, because the type of language that people use when they speak is different than the type of language they use when they write. And so to figure out what is junk at the level of spoken language, you need to retrain the models by giving them a lot of annotated data, transcripts of spoken language, essentially.

00:35:39 Right. But it's not a difficult problem. It doesn't require any tech that we don't already have. All it requires is an extra zero in our AWS budget, essentially. All right, yeah, I definitely understand. A lot of great information, and honestly, I can definitely see the future potential and the value of this technology.

00:36:02 Do you have any final words you want to say before we wrap things up? I guess the one note I have is we talked a lot about systems and software and solutions, but I think ultimately we need people to care about what they put into their brain; otherwise nobody will use that stuff. Right. And so my suggestion to anybody listening to this is: if you pay attention to what you eat, I would recommend you also pay attention to what information you consume, because it's just as important, maybe more so.
And if we know that bad food makes us unhealthy, bad information makes us mentally unhealthy too.

00:36:41 Yeah, I very much agree with that. Of course, at the end of the day, everybody here wants to be smarter and wants to be wiser and wants to win every argument against their friends. So get the best information possible to do so. And from what it sounds like, Otherweb can provide you important, concise, factual information without all that junk.

00:37:05 So with that said, I hope you guys enjoyed today's podcast episode. I thought it was very interesting and very insightful. Of course, you can check out his information on my guest page, and you can check out the links. Obviously, as he said, you can donate or become an investor and help get that extra zero, of course, so the website can continuously evolve.

00:37:26 Rate this podcast episode five stars, of course, and leave a review. Hope you guys enjoy. Y'all have a good one. Take care and peace.

Alex Fink Profile Photo

CEO

Alex Fink is a Tech Executive, Silicon Valley Expat, and the Founder and CEO of the Otherweb, a Public Benefit Corporation that (among other things) generates a “nutrition label” for media content so people can be more informed about the content they consume online. The Otherweb is also available as an app, a website, a newsletter, or a standalone browser extension.

After a long career in Silicon Valley as a tech executive at a variety of startups, Alex decided that instead of contributing to the “Social Dilemma” (a problem created by Silicon Valley with the advent of social media, in which clickbait is incentivized), he would rather build the solution using the Valley's own methods. He moved to Austin, rolled up his sleeves, and a few years later the Otherweb was born.