How do you "read" the Internet?
-
For example: If you want to peruse the news in the morning, do you just go to news.google.com (or whatever your favorite websites are)? Do you just go to the Code Project home page and see what's new? Do you just scroll through social media and forum posts until you find something amusing or interesting to actually read? In other words, do you use any special software (not that any actually exists, methinks) to do any preprocessing, so you don't have to spend all that time bouncing between websites to see if anything is of interest? Yes, there are feed readers, but how many people actually use them, or set up triggers for keywords or, say, a post by their favorite authors? What I'm getting at is, it seems like we're still in the stone ages when it comes to using computers to filter out the crap and alert us when something we've said we're actually interested in occurs. Is that not the case? So I ask you: how time-consuming is your "process" of perusing information on the Internet, the one you go through every day as part of your routine, and how do you think it could be improved? Marc
Latest Articles - APOD Scraper and Hunt the Wumpus Short video on Membrane Computing Hunt the Wumpus (A HOPE video)
5 REM CODEPROJECT FIRST!
7 GOTO 30
10 READ GMAIL
20 READ BBC
30 READ CODEPROJECT
40 GOTO 10
50 END
-
I usually start in *cough* Yahoo, which redirects me to real websites where I can read news. (In case you wonder: Yahoo, because I have a private mail account there that I opened in 2001.) No active social media reading. No special software (I was not aware that something like that exists, but hey, there is an app for everything today). I use my tablet a lot while watching TV: to get answers to games before the contestants, to check actor bios, to check movie reviews or read about the start of a show if I catch up after it has started, to follow live tweets (especially on soccer or special live broadcasts), and to read some news items I usually receive via newsletters in my mail. Midday routine: www.viedemerde.fr[^], xkcd (and what-if), which usually has me surfing on to news or scientific articles. Night routine: www.imgur.com[^], www.alt-tab.com[^]. Workday routine: CP!
~RaGE();
I think words like 'destiny' are a way of trying to find order where none exists. - Christian Graus Entropy isn't what it used to be.
-
I used to have iGoogle. From there I could read the news headlines. I haven't found anything like it that I like though, so I just stopped reading news altogether. I'm a happier person now (ignorance really is bliss!) :) As for CP I check the homepage to see what's new. And of course the Daily Insider :)
It's an OO world.
public class SanderRossel : Lazy<Person>
{
    public void DoWork()
    {
        throw new NotSupportedException();
    }
}
-
Mostly I go directly to the sites I'm interested in. I'd estimate that's 70-85% of my reading total. RSS feeds to low volume (generally once/day or less) sites that tend to have interesting content is maybe 5%. Stuff sent by friends/linked on forums/etc makes up the rest.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason? Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful? --Zachris Topelius Training a telescope on one's own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies. -- Sarah Hoyt
-
Well, I have a couple of papers I read online - one local and one national. I also read the CP news digests and the BBC. Beyond that, I have Flipboard set up just the way I like it to aggregate things for me.
Pete O'Hanlon wrote:
I have Flipboard set up just the way I like it to aggregate things for me.
Interesting -- my new phone wanted me to set up Flipboard, but it seemed too invasive. Marc
-
In this context: 'used' :)
-
I use RSS feeds. A quick check of BBC and PhysicsWorld is usually ample. I wrote the reader program while learning about threading, socket programming and custom controls. Third most used program I have after the IDE and a browser.
enhzflep wrote:
I wrote the reader program while learning about threading, socket programming and custom controls.
Very cool! Marc
-
But how would you filter it? Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out. Theoretically, a backprop routine could be trained to provide you with lists of pages that would be of interest, but that would probably take so long to train that your interests would change three times before it was finished. I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest. But, to avoid being bombarded with sites that pay the on-line giant, just use the newspaper method, and only "buy" the news sites that you like/trust/enjoy reading, or use some kind of crowd-sourcing/social-sharing/message-board solution, and only visit pages recommended by other individuals involved in the solution.
I wanna be a eunuchs developer! Pass me a bread knife!
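The keyword-trigger approach being debated above can be sketched in a few lines. This is a toy illustration in Python, not anyone's actual implementation; the trigger weights and sample articles are invented. It also demonstrates Mark's objection: a broad trigger like "windows" happily matches an off-topic story.

```python
def score_article(title, summary, triggers):
    """Sum the weights of trigger phrases found in the article text."""
    text = (title + " " + summary).lower()
    return sum(weight for phrase, weight in triggers.items() if phrase in text)

def filter_articles(articles, triggers, threshold=1.0):
    """Keep only articles whose combined trigger weight meets the threshold."""
    return [a for a in articles
            if score_article(a["title"], a["summary"], triggers) >= threshold]

# Invented sample data. Note that the broad trigger "windows" matches a
# home-improvement story as well as anything about the OS -- exactly the
# over-matching problem described above.
triggers = {"nlp": 1.0, "windows": 0.5, "rss": 1.0}
articles = [
    {"title": "Intro to NLP services", "summary": "Comparing entity extraction APIs"},
    {"title": "Replacing your windows", "summary": "Home improvement on a budget"},
]
```

With a threshold of 1.0, only the first article survives; the second scores 0.5 from the spurious "windows" hit. Raising weights to catch more brings the noise along with it, which is the maintenance treadmill described in the post.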
-
English never does look right. It's a massive kludge!
-
Mark_Wallace wrote:
But how would you filter it?
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
Mark_Wallace wrote:
Keywords wouldn't be much use, because they would restrict you to a few topics (half of which you're probably not interested in), and would have to be updated/added to so much that they'd end up filtering nothing out.
True, and even with NLP, one would have to set up triggers for entities, concepts, etc.
Mark_Wallace wrote:
I can't really see a locally-installed app being able to deliver "pages that will be of interest to Markie" (or perhaps "Marcie", in your case), so maybe it would have to be down to some on-line giant to deliver pages-that-might-be-of-interest.
Not necessarily -- give it some RSS feeds, have the app know how to read through Facebook/Twitter/whatever, etc. Marc
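The "give it some RSS feeds and trigger on entities" idea can be made concrete with the standard library alone. A rough sketch, under stated assumptions: the feed content and entity list here are invented, a real reader would fetch the XML over HTTP, and plain substring matching stands in for the NLP entity extraction Marc describes.

```python
import xml.etree.ElementTree as ET

def items_matching(rss_xml, entities):
    """Return titles of RSS items that mention any watched entity.

    Substring matching is a placeholder for a real NLP entity extractor.
    """
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        text = (title + " " + desc).lower()
        if any(entity.lower() in text for entity in entities):
            hits.append(title)
    return hits

# Invented sample feed for illustration.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <item><title>New NLP article posted</title>
        <description>Comparing three services</description></item>
  <item><title>Celebrity gossip roundup</title>
        <description>Nothing of interest here</description></item>
</channel></rss>"""
```

Pointing `items_matching(SAMPLE_FEED, ["NLP"])` at a handful of feeds a few times a day is essentially the locally-installed alternative to the "on-line giant" approach.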
-
No special software, but a list of sites that I browse over morning coffee and again in the middle of the day. For analysis, I read the Economist every week. (Not perfect, but useful). /ravi
My new year resolution: 2048 x 1536 Home | Articles | My .NET bits | Freeware ravib(at)ravib(dot)com
-
If you would like to improve it, ban contentless sites. Or aggregators. My time isn't wasted reading sites or perusing search links. My time is wasted on clickbait, and on blog posts that summarize summaries of blog posts, taking 5 clicks to get to the real author. Sadly, even CP is guilty of this. Sometimes the link in the news isn't to the source but to some aggregator that links to it. And, AFAIK, CP has a full-time employee doing this! (Admittedly, it doesn't happen often, so don't think I am calling out CP.) Do you want great news filtered just for you? Pay someone. A web site would probably charge a few hundred a month for the privilege, or you could just hire an intern at $10/hr to constantly give you good links : )
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost "All users always want Excel" --Ennis Lynch
-
Feedly is perpetually open on all my screens (about 200 feeds), and updated whenever I need a hit. When I'm scanning for news items, I tend to open all the 'usual suspects' in their own tabs. They get updated a couple of times a day, depending on how much of a panic I'm in for news items. I don't use them in Feedly as they tend to be way too noisy. By noon, I have Hacker News, Reddit/Programming and Reddit/Technology open and updating every 15 minutes (again, depending on how much of a panic I'm in). Dark ages? Maybe. I prefer to think of it as more like the industrial revolution: there are some labour savers, but on the whole it's a dirty, smelly business.
TTFN - Kent
-
Kent Sharkey wrote:
When I'm scanning for news items, I tend to open all the 'usual suspects' in their own tabs. They get updated a couple of times a day, depending on how much of a panic I'm in for news items. I don't use them in Feedly as they tend to be way too noisy.
Hmmm, you may be a good guinea pig for what I have in mind. Stay tuned. :) Marc
-
Marc Clifton wrote:
NLP (Natural Language Processing). Extracts the semantic meaning of the content. I'm putting together an article on that at the moment.
I wanna read it!
Jeremy Falcon
-
Jeremy Falcon wrote:
I wanna read it!
:cool: I have a preliminary version comparing three NLP services here[^], but keep in mind it's preliminary -- I'm getting a lot of good feedback from each provider that I need to incorporate, which will resolve some of the "odd behaviors" I mention at the end of the article. Marc
-
Marc Clifton wrote:
I have a preliminary version
I won't tell anyone. :~
Jeremy Falcon
-
Jeremy Falcon wrote:
I won't tell anyone.
:) Even if they find out, it doesn't matter. The repository is there so that AlchemyAPI, OpenCalais, and Semantria can give me feedback on how poorly I'm representing them. So far, Semantria is proving the most difficult to work with where their API is concerned. Stuff is NOT clear. But check out their pricing. :omg: Who would want to touch that anyway? Marc