
SES – Organic Listings Forum

Pose questions to our panel of experts about free "organic" listing issues, plus participate in this session that allows the audience to share tips, tools and techniques. There’s no set agenda, so this is an ideal session to discuss any major recent changes with organic listings.

Moderator:

  • Danny Sullivan, Conference Co-Chair, Search Engine Strategies San Jose

Speakers:

  • Bruce Clay, President, Bruce Clay, Inc.
  • David Naylor, SEO, Bronco
  • Todd Friesen, Director of Search Engine Optimization, Range Online Media
  • Jill Whalen, Owner, High Rankings
  • Mike Grehan, Vice President, International Business Development, Bruce Clay, Inc.
  • Greg Boser, President, WebGuerrilla LLC

The Organic Listings Forum session started sharp at 9 am and Greg Boser and Dave Naylor were both seen wearing sunglasses.

Here is the Q&A:

Q: I was just curious about the proximity of content in the source code being weighted more heavily based on perceived importance. Do you think it really matters anymore? Do you think the engines can determine where the content is from the source code?

Bruce: I’ve run experiments on my own site and moved code up and moved code down. I haven’t seen it impact rankings at all. But if you repeat the same stuff at the top of the page, we recommend that you take it off or reposition it. The table trick is something you can implement (search for it on Google).

Followup: Can you expand on externalizing scripts and its ability to move content higher in the source?

Greg: If your content is not towards the top, you’ve probably built a crappy website.

Dave: If you keep it really simple, keep in mind that spiders are stupid things. Don’t put a gazillion links there.

Q: I have a text-driven site with dynamic pages that I need to optimize to get into the top 10 for most of them. I was wondering if there’s a significant advantage of CSS over tables and if I should take that fight to my IT department.

Greg: Yes, you should, only because that’s how the web is progressing and that’s how we roll these days. I don’t think there’s an SEO benefit, but I think it’s important to follow and maintain some of those standards. If your website is using the font tag, that’s bad because it’s deprecated. Can you make an argument that you can rank way better? Not really. I wish that search engines did reward valid code, but they don’t.

Dave: Way back, if you had a lot of elements inside of your table, the page wouldn’t render until everything loaded. It’s about user experience too.

Jill: The bottom line is that it’s not going to affect your SEO. If it’s a big deal to revamp your site, don’t do it.

Bruce: When I redid my site, I switched from tables entirely to CSS. I also made it W3C compliant. That may be something that emerges. I was moderating a panel at adTech and Google said that the cleaner the code, chances are the search engines will get a better idea of what your site is about. From that point of view, go to CSS because it’s simpler.

Q: I work for a medical publisher and we’re trying to make as much money as we can off our content. We put scientific articles up and try to sell them. We want to get indexed. I’m facing a problem with duplicate content because we cater to different environments (hospitals, education, etc.) and want that audience to see it in the results. How do you convince the indexers that they want to get it more than one time?

Todd: That’s going to be problematic.

Greg: Here’s the thing: it won’t work. I want things like that too. In the big picture, the engines are really good at duplicate detection. An example: the AP publishes the same story on hundreds of websites verbatim, but when you do a search, only one of those websites shows up in the results. That’s based on trust. If you see the same article over and over, it’s a poor user experience. The question is: can you leverage the content in other ways? We do something called conditional redirection (a robots.txt file on steroids). We can redirect pages to one central location, and you get the benefit of all the random links to pages that won’t rank anyway.
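Greg’s conditional redirection tool is proprietary, but the idea he describes can be sketched roughly: when a known crawler requests one of the duplicate market-specific URLs, answer with a 301 pointing at a single canonical copy, while humans keep seeing the page in place. Everything below (the bot token list, the URL map, the function name) is illustrative, not his actual implementation:

```python
# Rough sketch of "conditional redirection": crawlers get a 301 to one
# canonical copy of duplicated content, while human visitors keep seeing
# the market-specific page. Bot tokens and URL mapping are hypothetical.

BOT_TOKENS = ("googlebot", "slurp", "msnbot")  # common crawler UA substrings

CANONICAL = {
    "/hospitals/article-123": "/articles/article-123",
    "/education/article-123": "/articles/article-123",
}

def handle_request(path, user_agent):
    """Return (status, location) for an incoming request."""
    ua = user_agent.lower()
    is_bot = any(token in ua for token in BOT_TOKENS)
    target = CANONICAL.get(path)
    if is_bot and target:
        return 301, target  # crawler: redirect to the canonical URL
    return 200, path        # human visitor: serve the page in place
```

Note that user-agent-based redirection is a form of cloaking and carries its own risks if the engines decide they don’t like it.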

Dave: If I just take your content and put it on my higher ranking website, whose website are they going to choose?

Followup: All my stuff is copyrighted.

Todd: That doesn’t mean anything! (Sad truth.) Having multiple copies and ranking one version for one market and another version for a different market won’t work.

Followup: Can you tell my boss that?

Dave: They’re all about what’s most relevant.

Bruce: How many here syndicate content? This is the exact problem syndicators have. Some people get their content ripped off by affiliates. Your content is going to be indexed on the page that is most relevant to the query. The site that ranks highest is the one that has the highest authority. I had a client (Edmunds.com, the car guys) who would write their content and AOL would copy it. AOL would rank and not Edmunds. This took a lot of effort to straighten out.

Jill: The simple rule is one URL for any piece of content. That’s your best bet.

Q: We have an e-commerce site in the states and we want to launch in Europe. We want to host the sites in the US. Is there an issue with that?

Todd: The country domain extension will do well for you. If it was sitting on a .com, then you’d need to start playing around with IPs.

Mike: You do have to have TLDs for a particular country but I’ve found that being hosted there helps as well.

Greg: There’s also the duplicate content issue across the US, UK, South Africa, and Australia. Be careful. Also, Google favors local content more than in the past. If at all possible, use the TLD or an IP that traces to that country.

Mike: We’ve had a number of issues where it’s not possible for the client to host in their countries. But the best advice is to host in the other countries. Sometimes this isn’t feasible financially. I’ve said to Matt in the past that it would be a great idea to add a tool into Webmaster Central to put in an option where you can specify which country you’re targeting.

Greg: You should set up a reverse proxy. You can also do multiview DNS, which is cloaking at the DNS level: you give a different IP based on who is trying to resolve the DNS. That can make the engine believe you’re in a different country than the one you’re actually in.
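As a rough illustration of the multiview (split-horizon) DNS idea Greg mentions, the answer a nameserver hands back can depend on which resolver is asking. The networks and addresses below are invented for the example; a real setup would live in the nameserver’s configuration (e.g. BIND views), not in application code like this:

```python
import ipaddress

# Toy illustration of "multiview" (split-horizon) DNS: return a
# different A-record answer depending on which resolver is asking.
# All networks and addresses here are made up for the example.

VIEWS = [
    (ipaddress.ip_network("81.0.0.0/8"), "203.0.113.10"),   # "UK" resolvers
    (ipaddress.ip_network("0.0.0.0/0"), "198.51.100.10"),   # everyone else
]

def answer_for(resolver_ip):
    """Pick the A-record answer based on the querying resolver's address."""
    addr = ipaddress.ip_address(resolver_ip)
    for network, answer in VIEWS:
        if addr in network:
            return answer
    return None
```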

Q: I am doing a good job at getting ranked on Google, MSN, and Yahoo, but I can’t figure out why I don’t rank on Ask.

Dave: Ask is a bit weird. Ask looks at communities and themes and areas, so you need to make sure that the authoritative sites in your industry are linking to you.

Todd: In the paid link panel, the big argument was that paid links are all bad because they cannot determine their relevancy. So people bought thousands of links on blog networks. It didn’t make sense. Ask really understands this; they really understand the relationship of different communities. I wouldn’t worry about it though. Let’s wait for them to come out with a new algorithm.

Mike: The original algorithm is subject-specific and built around communities. There’s an algorithm based on PageRank that is keyword-independent, and there’s a keyword-dependent algorithm as well. But I tend to find that the subject matter is really important.

Greg: Their search doesn’t scale. Several years ago, someone asked us – "How do you spam our engine?" And I said to them, "I’ll tell you as soon as you bring traffic."

Jill: They don’t bring traffic, so don’t worry.

Danny: Ask’s big thing was "when you do a search with us, we’re going to take a collection of documents that match the query you look for, and we’re going to look at keyword relevance and at the linking within those documents to get relevance." To me, Ask gets funky because of the way their ranking algorithm works.

Dave: Ask prioritizes the way that they spider – if you don’t have a robots.txt file, you go to the bottom of the list. Even if it’s empty, you’re at the bottom of the list.
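If Dave is right about the crawl queue, there is no downside to serving a permissive robots.txt rather than none at all. A minimal file that allows everything looks like this (an empty Disallow line means nothing is blocked):

```
User-agent: *
Disallow:
```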

Q: Our website is teardown.com; we disassemble electronics and write competitive intelligence reports. We rank well for teardown, assembly, etc., but when we come out with a new report, it doesn’t rank highly very quickly after we publish it.

Greg: The biggest threat to SEO is the CEO. I suggest you log into his account early in the morning and personalize the results so that he sees the rankings very highly.

Followup: I can’t do that because he stays up all night.

Greg: You should implement an RSS feed because it will attract Google’s bot for blogs and that has a lot to do with news search. It helps get you spidered a lot quicker.

Dave: Just put a blog up there with a blog footprint. It will get you ranked much quicker: within an hour, within a day. If your CEO wants to see a new report rank immediately, a blog will help.

Mike: I read a whitepaper a few weeks ago about how search engines are able to rank news results faster from looking at RSS feeds. I think that generally speaking, if you have newsworthy content, you need an RSS feed.

Todd: You should also consider press releases.

Jill: How do you link to it when you put it out there? Is it easily accessible from your main navigation?

Followup: We put it on our main page and then put it on the content page.

Jill: That should help.

Bruce: We blog the conference, and we actually do it pretty much live. Every one of our blog posts is spiderable within 15 minutes of being posted. (Hi Lisa!)

Q: I wanted to find out if DMOZ is a player anymore.

Jill: Submit and forget.

Followup: How do you get your listings out of there?

Greg: Just ignore it.

Jill: You can use your noodp tag to get your description out of it.

Followup: Why is Google still using it?

Danny: Google is using their directory, but nobody goes to it. They aren’t dropping it because if they did, there’d be anger about how Google is dropping the Open Directory. So they have it. But the noodp metatag lets you stop Google from using the DMOZ title in your results.
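The noodp value Jill and Danny mention is set in an ordinary robots meta tag in the page’s head:

```html
<meta name="robots" content="noodp">
```

It tells engines that honor it not to substitute the DMOZ (Open Directory Project) title and description for your page in their results.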

Mike: The main reason why they use those directories is because it’s a directory with a human element. But nobody outside the SEO community knows what DMOZ is.

Bruce: Don’t worry too much about directories. Last year, I had half a million unique visitors, and only one of them came from the Yahoo directory.

Q: I have a client who has tens of thousands of pages on their website and they publish fresh content every day and they didn’t do a good job with sitemaps or 301s. The problem is that Google is removing some of these old listings, but Yahoo doesn’t flush out this old content. Do you have any tips for removing the old content?

Todd: Within SiteExplorer, they have a facility where you can instantly pull URLs out of the index.

Greg: Buy Tim Mayer something and ask him to fix your stuff.

Q: I’m a jewelry seller. I have very unique content and the site is optimized fairly well. I can’t figure out why I’m not ranked. I think that I’m being buried now because sites like Amazon are duplicating my content. What do I do?

Greg: That’s it. When you syndicate stuff, that’s a risk you’re taking. Amazon will always win over your site because it’s more trusted. They’re the only e-commerce site left in the world that ranks. That’s the downside of syndication.

Todd: If you’re going to syndicate content on that level, syndicate a different version of your content.

Q: We’re trying to protect our copyrighted material and we’re trying to put information in our PDF that says we’re the authoritative owner of the site. Does the search engine care?

Todd: No, that’s just a link.

Greg: Googlebot is very stupid. They take the content and throw it in a pile with the rest of the data on the Internet. They don’t rely on any input from you, the webmaster, because we all lie, cheat, and steal. They try to use as few signals provided by you as possible, so they focus on authority, PageRank, etc.

Dave: When you think about it, Google doesn’t know whether you’re lying or not.

Bruce: There’s an actual tag in HTML called the quote tag. It’s supposed to specify the authority of the source for a specific quote. We’ve been wrapping sources in quote tags to point to the original content. There’s no proof that anyone pays attention to that HTML, but as part of an overall project, it seems to have helped me. The only assumption here is that the people who duplicate your content actually point to you and use that tag.
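For what it’s worth, the markup Bruce is describing is the cite attribute on HTML’s quotation elements (q for inline quotes, blockquote for block quotes); whether any engine actually reads it is, as he says, unproven:

```html
<blockquote cite="http://www.example.com/original-article">
  Excerpted passage, attributed to its original source.
</blockquote>
```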

Danny: Your pain is well understood and shared by many people. It’s frustrating. We’ve waited many years for this but they’re focused on video copyright theft right now. All those issues on YouTube now are applicable to webpages. Aaron Wall had a good rant where he poked at Google and said they don’t care about copyright. The good news is that a lot more people are being vocal about duplicate content, so maybe we’ll get better tools in the future to verify the original source of the information.

Q: Bruce, you talked earlier about experimenting on your site with techniques, and I think that most of us do this. But do you recommend setting up a really clean test environment? If so, are there any tips?

Dave: There’s no such thing as a really clean test environment. If you’re going to do it, put it on a domain that isn’t worth keeping. Don’t do it on a quality domain ever.

Jill: If you’re not trying to push the envelope, use any blog and test how many keywords are indexed in meta descriptions, etc. You won’t get in trouble for that and you can learn a lot. That’s what I do.

Mike: You can reduce the risk by dealing with an affiliate (webmarketingnow.com).

Greg: The hardest thing is replicating the factors. When you do research and development, you can’t replicate authority and trust. You have to test specific theories. In the old days, we could rip government sites and do numerical find-and-replaces for common words (we’d replace the word census with 19427), and then we could search for those numeric combinations to measure keyword density, etc., because there were no competing pages. We looked at thousands of factors. But you can’t draw conclusions from what you see anymore.

Todd: We run SEO for about 28 different brands and we get to look across 28-29 different types of websites in different verticals, but it’s very hard to do tests.

Bruce: We did simple tests. I own a lot of URLs. Nobody links to many of these, so I put in content and test it, then take the site down and wait for the data to be de-indexed so I can test again. You don’t want the first test to bias the second or third test.

Dave: Different industries have different quality signals. Consider the pharmaceutical industry: a test may work well for one industry but not for another.

Q: I’m looking for a tool to understand the optimization of my site. Do you have any tools that you’d like to share with us?

Todd: Webmaster Central helps. You can use its tools to find broken links.

Bruce: If you search for "free SEO tools," you can get 132 free tools. I think most of us have proprietary tools that we’ve written.

Q: I have worries about pages being scraped over the years to the point where snippets of your page are all over the web.

Greg: I apologize for that.

Followup: We always beat somebody who scrapes the whole page. But I’m beginning to see that we’re occasionally being treated badly. Have you seen that?

Greg: Some industries are powered almost 100% by scrapers. Most of them will leave your links intact. You can build backlinks from this.

Followup: Most of these people are not taking everything. They’re taking snippets. Using a tool like Copyscape helps us find them, but they’re not in Google.

Greg: Google is doing a pretty good job. They’ll let you put AdSense on it though (laughter).

Dave: That’s actually changed. (Yay!) Google has gotten really clever about scraping data.

Q: We’re considering using a content management system and I’m worried about using iframes. Is that a problem?

Todd: There are many CMSes: you want to have good URL structure and no iframes.

Dave: Most SEOs use WordPress for their blogs. Check how to optimize WordPress.

Jill: You want to be able to customize your title tags and meta descriptions as well.

Dave: Does it put session variables in the URLs? Those CMSes should be crossed off the list.
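One way to check Dave’s point when evaluating a CMS is to look at whether its URLs carry session identifiers at all. A sketch like the following (the parameter names are common examples, not an exhaustive list) shows the kind of URL cleanup you’d otherwise be forced into:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Strip session-style query parameters from a URL. The parameter names
# below are common examples, not an exhaustive list.

SESSION_PARAMS = {"sid", "sessionid", "phpsessid", "jsessionid"}

def strip_session_params(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

For example, `strip_session_params("http://example.com/report?id=5&sid=abc123")` returns `"http://example.com/report?id=5"`.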

Todd: Look at people who use these CMSes and see how they’re being crawled to see if they are good. We have a client who pays about $500k a year to license the CMS and it propagates the same title over the entire site, so price isn’t a good indicator.

Q: We’re finding that we’re getting a lot of referrals from Google, but Yahoo and MSN are not close. Do you find that there are issues with those engines?

Greg: There are demographic differences between the engines. It’s not always about volume. Benchmark whether it’s a ranking issue or just that nobody uses the engine. Yahoo indexes everything really well but doesn’t rank it very well.

Bruce: It’s very specific to industry on Yahoo. Equally optimized sites may rank differently in different engines. There’s no real way of getting a site to rank everywhere without putting in a fair amount of effort.

Todd: MSN does have deep crawl issues. They admit to that.

Dave: I see that all the time. With Google, the number of backlinks determines how deeply they index you. But with Microsoft, there’s no indicator to determine how deep they should go. Yahoo is a weird one because they can go crazy and end up with 10x the number of pages Google has. I don’t know.

SOURCE: SEARCH ENGINE ROUND TABLE

About Navneet Kaushal
Nav is the founder and CEO of PageTraffic, a premier search engine marketing company known for its assured SEO services, web design and development, copywriting, and full-time SEO professionals.

Navneet has wide experience in natural search engine optimization, internet marketing, and PPC campaigns. He is a prolific writer whose articles can be found in the "Best Articles" section of many websites and article banks. As a search engine analyst, he has over 9 years of experience. He writes for WebProNews.