How do you "read" the Internet?
-
I use RSS feeds. A quick check of BBC and PhysicsWorld is usually ample. I wrote the reader program while learning about threading, socket programming and custom controls. It's the third most used program I have, after the IDE and a browser.
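The core of it is very little code; here's a rough sketch of just the fetch-and-parse part (not my actual program -- plain HttpClient and XDocument, with the BBC feed as an example URL):

// Minimal RSS check: fetch a feed and list the latest item titles.
// The feed URL is only an example; substitute whatever you follow.
using System;
using System.Linq;
using System.Net.Http;
using System.Xml.Linq;

class FeedCheck
{
    static void Main()
    {
        const string url = "http://feeds.bbci.co.uk/news/rss.xml";
        using (var client = new HttpClient())
        {
            string xml = client.GetStringAsync(url).Result;
            var doc = XDocument.Parse(xml);

            // RSS 2.0 layout: rss/channel/item with title and link elements.
            foreach (var item in doc.Descendants("item").Take(10))
            {
                Console.WriteLine("{0}\n  {1}",
                    (string)item.Element("title"),
                    (string)item.Element("link"));
            }
        }
    }
}

The threading and the custom controls are where the real work was; the feed handling itself is almost trivial.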
enhzflep wrote:
I wrote the reader program while learning about threading, socket programming and custom controls.
Very cool! Marc
Latest Articles - APOD Scraper and Hunt the Wumpus Short video on Membrane Computing Hunt the Wumpus (A HOPE video)
-
For example: if you want to peruse the news in the morning, do you just go to news.google.com (or whatever your favorite websites are)? Do you just go to the Code Project home page and see what's new? Do you just scroll through social media and forum posts until you find something amusing or interesting to actually read?
In other words, do you use any special software (not that any actually exists, methinks) to do any preprocessing, so you don't have to spend all that time bouncing between websites to see if anything is of interest? Yes, there are feed readers, but how many people actually use them, or set up triggers for keywords or, say, a post by a favorite author?
What I'm getting at is, it seems like we're still in the stone ages when it comes to using computers to filter out the crap and alert us when something we've said we're actually interested in occurs. Is that not the case?
So I ask you: how time-consuming is your daily "process" of perusing information on the Internet, and how do you think it could be improved? Marc
But how would you filter it? Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out.
Theoretically, a backprop routine could be trained to provide you with lists of pages that would be of interest (a toy sketch of what I mean is below), but that would probably take so long to train that your interests would change three times before it was finished.
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest. But, to avoid being bombarded with sites that pay the on-line giant, just use the newspaper method, and only "buy" the news sites that you like/trust/enjoy reading, or use some kind of crowd-sourcing/social-sharing/message-board solution, and only visit pages recommended by other individuals involved in the solution.
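A toy version of that idea, just to show the shape of it: a single logistic unit over headline words, trained by gradient descent (the degenerate one-layer case of backprop). The clicked/skipped training examples are entirely made up.

// Toy "pages of interest" model: score a headline from the words in it,
// learning weights from made-up clicked/skipped examples.
using System;
using System.Collections.Generic;
using System.Linq;

class InterestModel
{
    static Dictionary<string, double> weights = new Dictionary<string, double>();
    static double bias = 0.0;

    // Sigmoid of the summed word weights -> "probability of interest".
    static double Predict(string headline)
    {
        double sum = bias;
        foreach (var w in Tokens(headline))
            if (weights.ContainsKey(w)) sum += weights[w];
        return 1.0 / (1.0 + Math.Exp(-sum));
    }

    // One gradient-descent step on log-loss: error = prediction - target.
    static void Train(string headline, double target, double rate = 0.1)
    {
        double error = Predict(headline) - target;
        bias -= rate * error;
        foreach (var w in Tokens(headline))
        {
            if (!weights.ContainsKey(w)) weights[w] = 0.0;
            weights[w] -= rate * error;   // each present word is a feature with value 1
        }
    }

    static IEnumerable<string> Tokens(string s)
    {
        return s.ToLowerInvariant()
                .Split(' ', ',', '.', ':')
                .Where(t => t.Length > 2);
    }

    static void Main()
    {
        // 1.0 = clicked/read, 0.0 = skipped -- entirely invented history.
        var history = new[]
        {
            Tuple.Create("New C# compiler features announced", 1.0),
            Tuple.Create("Celebrity spotted at airport", 0.0),
            Tuple.Create("Deep dive into socket programming", 1.0),
            Tuple.Create("Ten weird tricks for your kitchen", 0.0)
        };

        for (int epoch = 0; epoch < 200; epoch++)
            foreach (var h in history)
                Train(h.Item1, h.Item2);

        Console.WriteLine(Predict("Socket programming deep dive"));    // should score high
        Console.WriteLine(Predict("Weird celebrity kitchen tricks"));  // should score low
    }
}

And, of course, collecting enough genuine reading history to train it properly is exactly the long slog I was talking about.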
I wanna be a eunuchs developer! Pass me a bread knife!
-
English never does look right. It's a massive kludge!
-
Mark_Wallace wrote:
But how would you filter it?
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Mark_Wallace wrote:
Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out.
True, and even with NLP, one would have to set up triggers on entities, concepts, etc.
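To make "triggers" concrete, here's a hypothetical sketch: assume the NLP service has already handed back a list of entities/concepts for an article, and all we do is match them against the reader's declared interests.

// Hypothetical trigger check. The entity list would really come from one
// of the NLP services; here it's hard-coded to show the matching step.
using System;
using System.Collections.Generic;
using System.Linq;

class TriggerDemo
{
    static void Main()
    {
        var interests = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "natural language processing", "astronomy", "Raspberry Pi"
        };

        // Pretend these were extracted from one article by the NLP service.
        var articleEntities = new[] { "NASA", "Astronomy", "Mars rover" };

        var hits = articleEntities.Where(e => interests.Contains(e)).ToList();
        if (hits.Any())
            Console.WriteLine("Alert: article matches " + string.Join(", ", hits));
    }
}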
Mark_Wallace wrote:
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest.
Not necessarily -- give it some RSS feeds, have the app know how to read through Facebook/Twitter/whatever, etc. Marc
-
No special software, but a list of sites that I browse over morning coffee and again in the middle of the day. For analysis, I read the Economist every week. (Not perfect, but useful). /ravi
My new year resolution: 2048 x 1536 Home | Articles | My .NET bits | Freeware ravib(at)ravib(dot)com
-
If you would like to improve it, ban contentless sites. Or aggregators. My time isn't wasted reading sites or perusing search links. My time is wasted on clickbait and blog posts that summarize summaries of other blog posts, where it takes 5 clicks to get to the real author. Sadly, even CP is guilty of this. Sometimes the item in the news isn't the actual story but a link to some aggregator that links to it. And, AFAIK, CP has a full-time employee doing this! (Admittedly, it doesn't happen often, so don't think I am calling out CP.) Do you want great news filtered just for you? Pay someone. A web site would probably charge a few hundred a month for the privilege, or you could just hire an intern at $10/hr to constantly give you good links : )
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
Feedly is perpetually open on all my screens (about 200 feeds), and updated whenever I need a hit. When I'm scanning for news items, I tend to open all the 'usual suspects' in their own tabs. They get updated a couple of times a day, depending on how much of a panic I'm in for news items. I don't use them in Feedly as they tend to be way too noisy. By noon, I have Hacker News, Reddit/Programming and Reddit/Technology open and updating every 15 minutes (again, depending on how much of a panic I'm in). Dark ages? Maybe. I prefer to think of it as more like the industrial revolution: there are some labour savers, but on the whole it's a dirty, smelly business.
TTFN - Kent
-
Kent Sharkey wrote:
When I'm scanning for news items, I tend to open all the 'usual suspects' in their own tabs. They get updated a couple of times a day, depending on how much of a panic I'm in for news items. I don't use them in Feedly as they tend to be way too noisy.
Hmmm, you may be a good guinea pig for what I have in mind. Stay tuned. :) Marc
-
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
I wanna read it!
Jeremy Falcon
-
Jeremy Falcon wrote:
I wanna read it!
:cool: I have a preliminary version comparing three NLP services here[^], but keep in mind it's preliminary -- I'm getting a lot of good feedback from each provider that I need to incorporate, which will resolve some of the "odd behaviors" I mention at the end of the article. Marc
-
Marc Clifton wrote:
I have a preliminary version
I won't tell anyone. :~
Jeremy Falcon
-
Jeremy Falcon wrote:
I won't tell anyone.
:) Even if they find out, it doesn't matter. The repository is there so that AlchemyAPI, OpenCalais, and Semantria can give me feedback on how poorly I'm representing them. So far, Semantria is proving the most difficult to work with as far as its API goes. Stuff is NOT clear. But check out their pricing. :omg: Who would want to touch that anyway? Marc
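For anyone curious what the calls themselves look like, they all boil down to POSTing the text plus an API key to a REST endpoint, roughly in this shape. The URL, parameter names and response handling below are invented for illustration -- each of the three providers spells them differently, which is half the battle.

// Generic shape of an entity-extraction call. Endpoint and parameter
// names are hypothetical; every provider has its own, plus its own
// JSON schema for the response.
using System;
using System.Collections.Generic;
using System.Net.Http;

class NlpCall
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            var form = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "apikey", "YOUR-KEY-HERE" },                   // placeholder
                { "text", "Article text to analyze goes here." },
                { "outputMode", "json" }
            });

            var response = client
                .PostAsync("https://api.example-nlp-provider.com/entities", form)
                .Result;

            // Parsing the returned entities, relevance scores and sentiment
            // is where the providers really diverge.
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}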
-
Mark_Wallace wrote:
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest.
It can be built for us without much difficulty (see my article on "query intelligence ..., etc.") if you want such meta-level filtering (+ keywords). It's just a question of whether the effort is justifiable (it does take at least a few days, and those add up), given so many other "more important" things to handle right now.
Find more in vertical search portal[^]. Email searcher Email Aggregation Manager[^].
-
I used to have iGoogle. From there I could read the news headlines. I haven't found anything like it that I like though, so I just stopped reading news altogether. I'm a happier person now (ignorance really is bliss!) :) As for CP I check the homepage to see what's new. And of course the Daily Insider :)
It's an OO world.
public class SanderRossel : Lazy<Person>
{
public void DoWork()
{
throw new NotSupportedException();
}
}
-
I miss iGoogle, tried a couple of wannabes but they were nowhere as useful.
Never underestimate the power of human stupidity RAH
-
Mark_Wallace wrote:
But how would you filter it?
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Mark_Wallace wrote:
Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out.
True, and even with NLP, one would have to set up triggers of entities, concepts, etc.
Mark_Wallace wrote:
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest.
Not necessarily -- give it some RSS feeds, have the app know how to read through Facebook/Twitter/whatever, etc. Marc
Latest Articles - APOD Scraper and Hunt the Wumpus Short video on Membrane Computing Hunt the Wumpus (A HOPE video)
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Information on NLP you may find useless useful[^]. I couldn't help myself. :-D
Once you lose your pride the rest is easy.
I would agree with you but then we both would be wrong.
The report of my death was an exaggeration - Mark Twain Simply Elegant Designs JimmyRopes Designs
I'm on-line therefore I am. JimmyRopes
-
I have an HTML file with a list of my favorite links, which is the start page of my Chrome. Occasionally I update that HTML file with new links. On CP, I visit the Insider, Soapbox & Lounge regularly. And GIT too, where you can mostly find Nish. This way I save the typing time and, more importantly, the time spent searching and thinking about which sites to visit, since I already have them in my HTML file. Please let me know if you find any software for this that saves more time.
thatraja
-
Marc Clifton wrote:
when it comes to using computers to filter out the crap
If that is your goal, I suggest printing it* out and using it** as toilet paper ;P *: the internet, that is :cool: **: the printout
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
-
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
I look forward to that one. Lots.
I wanna be a eunuchs developer! Pass me a bread knife!
-
Marc Clifton wrote:
But check out their pricing.
Jeeze, those are laundering-drug-money prices.
I wanna be a eunuchs developer! Pass me a bread knife!
-
What exactly is it that you guys miss about iGoogle? Not saying that I'll get around to it soon, but I've been thinking about building a similar thing for myself for a while. If others have use for it, all the better.