How do you "read" the Internet?
-
Mark_Wallace wrote:
But how would you filter it?
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Mark_Wallace wrote:
Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out.
True, and even with NLP, one would have to set up triggers for entities, concepts, etc.
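To make the trigger idea concrete, here is a minimal sketch of entity-based filtering. It assumes a spaCy-style NER pipeline; spaCy, the en_core_web_sm model, and the trigger list are my choices for illustration, not anything specified in the thread.

```python
# Minimal sketch: flag a piece of text when its named entities overlap a
# reader-defined trigger list. Assumes spaCy and its small English model:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Entities/concepts the reader has said they care about (illustrative only).
TRIGGERS = {"nasa", "codeproject", "membrane computing"}

def interesting(text: str) -> bool:
    """True if any named entity in the text matches a trigger."""
    doc = nlp(text)
    entities = {ent.text.lower() for ent in doc.ents}
    return bool(entities & TRIGGERS)

print(interesting("NASA released a new image of the Horsehead Nebula today."))
```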
Mark_Wallace wrote:
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest.
Not necessarily -- give it some RSS feeds, have the app know how to read through Facebook/Twitter/whatever, etc. Marc
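On the RSS side, a minimal sketch of pulling feed entries into one place might look like this (feedparser is my choice of library, and the feed URLs are placeholders):

```python
# Minimal sketch: collect entries from a few RSS feeds into one local list.
# Assumes the feedparser package:  pip install feedparser
import feedparser

# Placeholder feed URLs -- substitute your own.
FEEDS = [
    "https://example.com/news.rss",
    "https://example.org/blog/feed",
]

items = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        items.append({
            "title": entry.get("title", ""),
            "summary": entry.get("summary", ""),
            "link": entry.get("link", ""),
        })

# Each item's title/summary could then be handed to the entity filter above.
print(len(items), "items collected")
```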
Latest Articles - APOD Scraper and Hunt the Wumpus Short video on Membrane Computing Hunt the Wumpus (A HOPE video)
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Information on NLP you may find ~~useless~~ useful[^] I couldn't help myself. :-D
**_Once you lose your pride the rest is easy.
I would agree with you but then we both would be wrong._**
The report of my death was an exaggeration - Mark Twain Simply Elegant Designs JimmyRopes Designs
I'm on-line therefore I am. JimmyRopes
-
For example: if you want to peruse the news in the morning, do you just go to news.google.com (or whatever your favorite websites are)? Do you just go to the Code Project home page and see what's new? Do you just scroll through social media and forum posts until you find something amusing or interesting to actually read?
In other words, do you use any special software (not that any actually exists, methinks) to do any preprocessing, so you don't have to spend all that time bouncing between websites to see if anything is of interest? Yes, there are feed readers, but how many people actually use them, or set up triggers for keywords or, say, a post by your favorite authors?
What I'm getting at is, it seems like we're still in the stone ages when it comes to using computers to filter out the crap and alert us to when something that we have said we're actually interested in occurs. Is that not the case?
So I ask you: how time-consuming is your "process" of perusing information on the Internet, the one you go through every day as part of your routine, and how do you think it could be improved? Marc
I have an HTML file with a list of my favorite links, which is the start page of my Chrome. Occasionally I update that HTML file with new links. On CP, I visit Insider, Soapbox & Lounge regularly. And GIT too, where you can mostly find Nish. This way I save the typing time, and more importantly the searching and thinking time about sites, since I already have them in my HTML file. Please let me know if you find any software for this to save more time.
thatraja
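Not software as such, but the start-page idea is easy to script. A minimal sketch, with an illustrative link list and output filename of my own choosing:

```python
# Minimal sketch: regenerate a start-page HTML file from a plain list of links,
# so adding a link means editing one Python list rather than hand-editing HTML.
# The links and the output path are illustrative only.
LINKS = [
    ("CodeProject Lounge", "https://www.codeproject.com/Lounge.aspx"),
    ("Google News", "https://news.google.com"),
]

rows = "\n".join(
    f'    <li><a href="{url}">{title}</a></li>' for title, url in LINKS
)

page = f"""<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Start page</title></head>
<body>
  <ul>
{rows}
  </ul>
</body>
</html>"""

with open("startpage.html", "w", encoding="utf-8") as f:
    f.write(page)
```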
-
Marc Clifton wrote:
when it comes to using computers to filter out the crap
If that is your goal, I suggest printing it* out and using it** as toilet paper ;P *: the internet, that is :cool: **: the printout
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
I look forward to that one. Lots.
I wanna be a eunuchs developer! Pass me a bread knife!
-
Jeremy Falcon wrote:
I won't tell anyone.
:) Even if they find out, it doesn't matter. The repository is there so that AlchemyAPI, OpenCalais, and Semantria can give me feedback on how poorly I'm representing them. So far, Semantria is proving the most difficult to work with as far as their API goes. Stuff is NOT clear. But check out their pricing. :omg: Who would want to touch that anyways? Marc
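For anyone curious, wiring up one of these hosted text-analytics services generally boils down to an authenticated HTTP call. The endpoint, parameter names, and response shape below are made up for illustration; they are not the actual AlchemyAPI, OpenCalais, or Semantria interfaces, each of which has its own request format:

```python
# Generic sketch of posting text to a hosted entity-extraction service.
# Endpoint, auth scheme, and response shape are hypothetical placeholders --
# consult the vendor's documentation for the real API.
import requests

API_URL = "https://api.example.com/v1/entities"  # placeholder endpoint
API_KEY = "your-key-here"                        # placeholder credential

def extract_entities(text: str) -> list:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume a response like {"entities": [{"text": "...", "type": "..."}, ...]}
    return [e["text"] for e in resp.json().get("entities", [])]

print(extract_entities("Marc Clifton is writing an article on NLP."))
```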
Marc Clifton wrote:
But check out their pricing.
Jeeze, those are laundering-drug-money prices.
I wanna be a eunuchs developer! Pass me a bread knife!
-
I miss iGoogle; I tried a couple of wannabes, but they were nowhere near as useful.
Never underestimate the power of human stupidity RAH
What exactly is it that you guys miss about it? Not saying that I'll get around to it soon, but I've been thinking about building a similar thing for myself for a while. If others have use for it, all the better.
-
Fast, easily configurable with plenty of sources and a good range of widgets. NO ADS.
Never underestimate the power of human stupidity RAH
-
Multi-pronged, in order:
1. When I see a newsy website I like, I sign up for email notifications of new items. So, I use my email as the first source.
2. news.google.com is my home page.
3. Facebook for news from friends and relatives and sometimes other news that Facebook selects for me to look at.
4. Pearltrees - a browser add-on that I've installed on all of my browsers on all of my computers, allows me to quickly save links in categories and see them from any other browser or computer. I have one category for interesting news sites.
5. Click on the ads for news that I happen to notice in the margins.
-
thatraja wrote:
I have an HTML file with a list of my favorite links, which is the start page of my Chrome.
I really like Pearltrees.com. It allows me to quickly save links in categories for later browsing. I made one category for interesting news sites. I have occasionally tried your method, but I have too many browsers on too many computers. In the past I put the links on a public page on the Internet, but it was always a nuisance to update. I have since installed Pearltrees on most of my browsers on all of my computers, and can see all of the links immediately from any other browser. The links are public (I use an anonymous alias). They even have a collaboration feature, in which a person may invite another to collaborate on a category. I have collaborated with strangers from all over the world on some.
-
Marc Clifton wrote:
Who would want to touch that anyways?
When they go out of business, they'll figure it out. Started doing some recon on this stuff, man, and it ties in perfectly with "Web 3.0". This stuff is crazy awesome.
Jeremy Falcon
-
Jeremy Falcon wrote:
and it ties in perfectly with "Web 3.0". This stuff is crazy awesome.
Indeed it does and is! Marc
-
DAILY: codeproject, geekstuff, makeuseof, sitepoint
WEEKLY: Stackoverflow, Superuser
AS NEEDED: several journals, blogs, and tech support discussion sites.
Hmmm... I guess I need to get a life or learn to hate code and problem solving! :mad: Facebook? No way!
"Courtesy is the product of a mature, disciplined mind ... ridicule is lack of the same - DPM"
-
Marc Clifton wrote:
What I'm getting at is, it seems like we're still in the stone ages when it comes to using computers to filter out the crap and alert us to when something that we have said we're actually interested in occurs
When AI reaches the level that it can correctly assess my mood to guess what I might want to read then I am going to be much more excited about the real robots running around (since they won't require that level of AI.) Until then I will just have to continue to randomly and impulsively bumble throughout the day finding interesting stuff to read.
-
jschell wrote:
When AI reaches the level that it can correctly assess my mood to guess what I might want to read then I am going to be much more excited about the real robots running around (since they won't require that level of AI.)
Agreed, but that's not the goal.
jschell wrote:
Until then I will just have to continue to randomly and impulsively bumble throughout the day finding interesting stuff to read.
The goal would be to provide you with more information than, say, just the title of a post, to make your bumbling more efficient. :) Marc
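As a toy illustration of "more information than just the title": show each item's summary next to its title and rank items by overlap with a declared interest list. The interests, items, and scoring rule here are made up for the example:

```python
# Toy sketch: rank items (title + summary) by overlap with declared interests,
# so the reader sees more than a bare title, ordered by likely relevance.
# Interests and items are illustrative only.
INTERESTS = {"nlp", "semantics", "astronomy", "c#"}

items = [
    {"title": "APOD: Horsehead Nebula", "summary": "A new astronomy image of the nebula."},
    {"title": "Celebrity gossip roundup", "summary": "Who wore what this week."},
    {"title": "Intro to NLP", "summary": "Extracting semantics and entities from text with NLP."},
]

def score(item):
    words = set(item["summary"].lower().replace(".", "").split())
    return len(words & INTERESTS)

for item in sorted(items, key=score, reverse=True):
    print(f"[{score(item)}] {item['title']} -- {item['summary']}")
```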
-
CodeProject.
You'll never get very far if all you do is follow instructions.