  1. #1
    Join Date
    Apr 2002
    Location
    Hollywood, CA
    Posts
    3,046

    php: using include(), then stripping specific html

    OK, here's the deal. I'm working on a project to make cricket (http://cricket.sourceforge.net/) more functional. What I'm doing is including all the CGI files into one page for each machine, so you have an overview of each machine and everything being graphed for it.

    Take for instance:

    <?php
    include (url);
    include (url);
    include (url);
    include (url);
    ?>

    I'm wondering if it's possible to remove a portion of the HTML from them after I include them?

    What's happening is that when I include each file, it puts its footer (as it should) in between the graphs. I just want one footer, but now I have about 20.

    Can anyone lend a hand? Cricket is an excellent program, and when I'm done with this project I plan on adding it to the contrib so people can have a better way to navigate through their cricket graphs without having to click through 7 pages of links before they actually see graphs.
    Last edited by case; 06-08-2005 at 09:12 AM.

  2. #2
    Join Date
    Jun 2004
    Location
    Digital Texas
    Posts
    55
    try these
    php.net/strip_tags
    php.net/fopen
    php.net/fread
    php.net/ob_get_contents
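    A minimal sketch of how those functions fit together -- reading a page with fopen()/fread() and stripping the markup with strip_tags(). The sample file and its contents are made up here so the snippet is self-contained; in your case the path would be one of the cricket CGI pages:

    ```php
    <?php
    // Write a small sample page so the example runs on its own;
    // in practice this would be one of your cricket pages.
    $file = tempnam(sys_get_temp_dir(), 'cricket');
    file_put_contents($file, '<h1>router1</h1><div class="footer">footer text</div>');

    // fopen()/fread() work the same on local paths and (with
    // allow_url_fopen enabled) on http:// URLs.
    $fp = fopen($file, 'r');
    $html = '';
    while (!feof($fp)) {
        $html .= fread($fp, 8192);
    }
    fclose($fp);

    // strip_tags() removes all markup; an optional second argument
    // lists tags to keep, e.g. strip_tags($html, '<img><a>').
    echo strip_tags($html), "\n";   // router1footer text

    unlink($file);
    ?>
    ```

    strip_tags() is a blunt tool, though -- it removes every tag, so it's better suited to pulling text out of a page than to deleting just a footer.
    
    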

  3. #3
    Join Date
    Jul 2003
    Location
    Kuwait
    Posts
    5,099
    You can use output buffering to handle this.

    Something like this should work (haven't tested it):

    PHP Code:
    <?php
    // Define a cleanup function for the captured output
    function cleanup($buffer)
    {
        /* Put your cleanup code in here --
           code that will strip the html.

           $buffer is what is in the output
           buffer. Return the filtered result. */
        return $buffer;
    }

    ob_start();
    require_once 'foo.html';
    $contents = ob_get_contents();
    ob_end_clean();
    echo cleanup($contents);
    ?>
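    As a concrete example of what such a cleanup function might do -- assuming cricket's footer sits between recognizable markers (the HTML comments below are hypothetical; check the real page source for what actually delimits the footer), you could cut it out with a regex:

    ```php
    <?php
    // Hypothetical: assumes the per-page footer is wrapped in
    // marker comments. Adjust the pattern to cricket's real output.
    function cleanup($buffer)
    {
        // /s lets . match newlines; .*? keeps the match non-greedy
        return preg_replace('/<!-- begin footer -->.*?<!-- end footer -->/s',
                            '', $buffer);
    }

    $page = 'graphs here<!-- begin footer -->generated by cricket<!-- end footer -->';
    echo cleanup($page), "\n";   // graphs here
    ?>
    ```

    Strip the footer from every included page this way, then echo a single footer yourself at the bottom.
    
    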
    In order to understand recursion, one must first understand recursion.
    If you feel like it, you can read my blog
    Signal > Noise

  4. #4
    It's possible to remove a portion of the HTML if you know where you want it to split. I've been working on a script myself to grab the bandwidth usage and MRTG graphs from cPanel and put them into my own script, and so far it's been successful all the way.

  5. #5
    Join Date
    Apr 2002
    Location
    Hollywood, CA
    Posts
    3,046
    if these files that I'm including are CGI files, does it make any difference?

  6. #6
    Join Date
    Jul 2003
    Location
    Kuwait
    Posts
    5,099
    No, it doesn't matter what kind of files they are. If you want the result of the files, then you need to make sure to include them using URL wrappers:

    include 'http://www.domain.com/cgi-bin/foo.cgi';

    not

    include '/home/domain/public_html/cgi-bin/foo.cgi';
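    One caveat worth noting: URL wrappers only work if PHP's configuration allows them. A quick way to check which settings apply (the URL include behaviour depends on allow_url_fopen, and on newer PHP versions also allow_url_include):

    ```php
    <?php
    // A URL include executes the *rendered result* of the CGI; a
    // filesystem include would try to parse the CGI source as PHP.
    // URL wrappers depend on these php.ini settings:
    echo 'allow_url_fopen: ',   ini_get('allow_url_fopen')   ? 'on' : 'off', "\n";
    echo 'allow_url_include: ', ini_get('allow_url_include') ? 'on' : 'off', "\n";
    ?>
    ```

    If URL includes are disabled, file_get_contents() on the URL followed by echo gives the same rendered output while needing only allow_url_fopen.
    
    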

  7. #7
    Well, I think it would basically be grabbing the HTML output through the browser, not the actual CGI script.

    I can't assure you, but there is a way.

  8. #8
    I actually made my script retrieve the HTML files from cPanel (logging in at the same time), then copy them to the server.

    Once the files were copied to another directory where they could be accessed publicly, I used fopen to open the pages, grabbed the contents of each file, broke it up into an array, and then just printed the part of the array that I needed.
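    A rough sketch of that approach -- the file contents and the delimiter below are invented for illustration; cPanel's actual markup will differ, so you'd split on whatever tag reliably separates the sections you want:

    ```php
    <?php
    // Stand-in for a page copied down from cPanel.
    $file = tempnam(sys_get_temp_dir(), 'cpanel');
    file_put_contents($file, 'header<hr>bandwidth graphs<hr>footer');

    // Read the whole file...
    $fp = fopen($file, 'r');
    $html = fread($fp, filesize($file));
    fclose($fp);

    // ...split on a delimiter, and keep only the piece you need.
    $parts = explode('<hr>', $html);
    echo $parts[1], "\n";   // bandwidth graphs

    unlink($file);
    ?>
    ```
    
    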

  9. #9
    Join Date
    Apr 2002
    Location
    Hollywood, CA
    Posts
    3,046
    Originally posted by Jason.NXH
    I actually made my script retrieve the HTML files from cPanel (logging in at the same time), then copy them to the server.

    Once the files were copied to another directory where they could be accessed publicly, I used fopen to open the pages, grabbed the contents of each file, broke it up into an array, and then just printed the part of the array that I needed.
    I would love to do it that way, but there are 14,000 dynamic pages and about 80,000 graphs. It took me 18 hours to spider all 94,000 URLs. Another problem is that these graphs change a lot, so I figured linking to them would probably be better than downloading them.

    Because everything is dynamic, I don't think I'll be able to grab the actual pages. The best part about all of this is that I thought I would be finished real quick, but the scope of the project keeps changing and requires me to keep getting deeper in.

    The actual purpose of this project is to add menus kind of like what cacti has. That's where I started... lol

  10. #10
    Hmm, sounds pretty impossible then. Very tricky with so many pages, especially because they're dynamic.
