    Workflow setups / tips MivaScript environments?

    Hey guys,

    I've been tinkering with MivaScript quite a bit lately, and I thought I'd share some things I've been doing to increase productivity. Add your own if you want, or feel free to improve/critique.

    I'll kick it off with Git:

    I use Git all the time. We all know that Miva Merchant has a built-in revision control system; however, that doesn't extend to MivaScript. For me, Git takes care of documenting what I've done and lets me save at specific points along the way.

    If you haven't used it, it's simple. Check out a site like bitbucket.org, where you can have as many private Git repositories as you like.

    Once you get a repo set up, getting basic functionality up and running takes only a couple of minutes:
    1 - Save your files
    2 - "Add" them to the staging area:
    Code:
    git add .
    3 - "Commit" the changes:
    Code:
    git commit -m "Added getRSAFromAttr() function"
    4 - "Push" the change to the repo.
    Code:
    git push
    Pretty simple stuff. That tracks the changes you've made, at the time you made them, and gives you a point to revert to.
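To actually use one of those save points, `git log` lists them and `git revert` walks one back; a quick sketch (the hash below is made up):

```shell
# List the commits (save points), one line each, newest first
git log --oneline

# Undo what a specific commit did, via a new "revert" commit
# (a1b2c3d is a hypothetical hash copied from the log output)
git revert a1b2c3d

# Or just restore a single file to how it looked at that commit
git checkout a1b2c3d -- file.mv
```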

    If you want to enhance your "Git environment," you can configure a .gitignore file. For me, I don't want my compiled MVC files in the repository. So my .gitignore file looks like:
    Code:
    *.mvc
    That way, when I:
    Code:
    git add .
    It won't add my compiled files to the repository, which is nice because they don't need to be there, and they'd just take up space.

    You can also set up a global .gitignore file, so you don't have to add it to every repo that you create:
    Code:
    touch ~/.gitignore_global
    echo "# Global Git Ignore File" >> ~/.gitignore_global
    echo "*.mvc" >> ~/.gitignore_global
    echo "*.so" >> ~/.gitignore_global
    echo "*.tar" >> ~/.gitignore_global
    # and so on for whatever you don't want to include..
    echo "temp_*.*" >> ~/.gitignore_global  #For any files I just want to temporarily create, dummy data files, etc.
    
    git config --global core.excludesfile ~/.gitignore_global
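If you want to double-check that a pattern is actually being picked up (local or global), `git check-ignore -v` tells you which rule matched and which ignore file it came from:

```shell
# From inside a repo: ask Git whether (and why) a file would be ignored
touch module.mvc
git check-ignore -v module.mvc
```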
    So, in addition to that, I also use the cURL library to help keep things running smoothly.

    I wrote a couple of scripts around it to smooth out my process.

    If any of you guys are unfamiliar with cURL, it's a command-line tool (and library) for transferring data, written in C. It supports lots of different protocols -- HTTP, SMTP, FTP and more. In this instance, I use it for the FTP functionality.

    So, for my FTP functionality, I take advantage of the Git directory structure: Git never tracks the contents of the .git directory it creates after a
    Code:
    git init
    So, here are my two files:

    upload.sh
    Code:
    #!/bin/bash
    
    # Source the FTP data from the .git folder of the current working directory
    . .git/ftp.dat
    curl -T "$1" "ftp://$OPT_1/" --user "$OPT_2:$OPT_3"
    ftp.dat
    Code:
    # This is the username and password for the FTP of this particular site.
    # OPT_1 is the location on the server that we're going to be uploading to
    #   (remember, we DON'T include the trailing slash!)
    # OPT_2 is the username for the FTP access
    # OPT_3 is the password for the FTP access
    
    DOMAIN='ftp.example.com'
    #OPT_1 uses the mm5/5.00/modules/component as an example, use whatever directory you want to upload to
    OPT_1="$DOMAIN/httpdocs/mm5/5.00/modules/component"
    OPT_2='USERNAME'
    OPT_3='PASSWORD'
    ftp.dat goes in the .git directory of the repo you are working with.
    upload.sh goes anywhere in your $PATH variable.

    I just threw mine in my /MSC/BIN/ directory. That way I knew it would work.
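For anyone following along, getting the script callable is just two steps; the path and file name below are examples:

```shell
# Make the script executable (path is wherever you dropped it)
chmod +x ~/bin/upload.sh

# Then, from the root of the repo (where .git/ftp.dat lives):
upload.sh example_module.mvc
```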

    So, since I'm literally the only person who uses my computer, and since all of the password files are stored locally, I don't really see a security risk. However, if you want to get really clever, you can do a few other things..

    For example, if you're using FileZilla with the Site Manager, you can query the XML file (on Windows, C:\Users\USERNAME\AppData\Roaming\FileZilla\sitemanager.xml) that contains all of the username and password information, to get the proper credentials without retyping them, or having to change files when a specific FTP account password gets changed. From there, you can query the servers and pull out the pertinent data. I haven't gotten that working yet, but if you have, post it up.

    So, my typical workflow goes something to the tune of:

    1 - code, code, code..
    2 - mvc file.mv
    3 - upload.sh file.mvc
    4 - Do I like the results?
    5 - If yes:
    5a - git add .
    5b - git commit -m "Added new functionality to file.mvc"
    5c - git push
    6 - rinse and repeat

    So, some obvious areas of improvement are:
    A) a more dynamic upload.sh script
    B) query the sitemanager.xml file, and create a "setup" script to generate / regenerate username:password
    C) Look at creating a "compile_upload" script to combine steps 2 & 3 in the process
    D) Get Git integrated with Vim

    What are your tips?
    PCINET, LLC

    Miva Merchant Design, Development, Integration & Support
    We built the most Miva Merchant stores!
    Miva shopping cart design & integration service and our Portfolio!

    e-mail: [email protected]
    web: www.pcinet.com

    "We who cut mere stones must always be envisioning cathedrals."
    Quarry Worker's Creed

    #2
    Re: Workflow setups / tips MivaScript environments?

    Nice tutorial. Thanks for sharing!
    Zen Radio : Relax :) : www.zenradio.fm
    MivaScript Tutorials & Scripts : www.mivascript.org



      #3
      Re: Workflow setups / tips MivaScript environments?

      *Updated Workflow*

      Hey guys,

      I recently updated my workflow, and I wanted to post it here. The last post I had didn't get too much attention, which is fine, but I *really* think this stuff is useful, especially for developers with many clients / servers.


      Previously, I talked about Git and how I integrated it into my workflow. I'm still doing that, as I think it's incredibly important, both for maintaining code revisions and documentation, and for maintaining various branches of the code.

      What I've done now is integrate XML, XSLT and some more bash scripting into my operation.

      My build environment is Windows 7 with Git/Cygwin installed. This will work from a *nix-based OS as well (Git and these tools originally come from the Linux world).

      I use FileZilla for my FTP program, as well as cURL for uploading individual files.

      The workflow is pretty much agnostic of the development environment, so things like Notepad++, TextEdit, TextMate, TextPad, Vi, Vim, emacs, etc. are all fine to use. Some have Git integration, while others don't.

      I installed a program called xsltproc to process my XSL template.

      This page (http://www.sagehill.net/docbookxsl/I...Processor.html) contains the necessary information to get xsltproc up and running on a Windows system.

      For xsltproc, I put the necessary .dll files in the BIN folder of my MivaScript compiler. This was just because I knew that folder was already in the Windows PATH environment variable, and I didn't have to worry about updating anything (Keep It Simple, Sir).

      For FileZilla I use the Site Manager, which I'm assuming most of you guys do too, if you use FileZilla. FileZilla has an XML file called "sitemanager.xml" which contains all of the username and password information for your FTP servers. What I wanted to do, was write a simple script which would allow me to pull the information from sitemanager.xml, and allow me to upload a file to a specific client's site, without much trouble.

      So, I wrote an XSLT file, updated my upload.sh file and modified how the ftp.dat file worked.

      Here are the working parts of the system:

      FileZilla Site Manager -- When I have my clients' sites in the Site Manager, I use the "Comments" section to tell me where the Miva Merchant installation is located. If it is in "/httpdocs/mm5", then I leave it blank, as that is the default location. But if it's in "/devel/mm5", then I'll put that (and only that) in the comments section of the Site Manager.

      ftp.dat -- This file is located in the .git folder of the current repo that I'm working in.

      It contains a single variable now: END_PATH_LOCATION

      END_PATH_LOCATION is the location on the server where the file is located. Now, let me be a little more specific -- it's the location relative to the Miva Merchant installation on the server where the file is located. Why is that important? Because some of the installations are Merchant5, mm5, Merchant2, merchant2, or something completely different. But we don't want to have to remember that much, we just want to be able to upload and be done.

      So, here's an example of a module which resides in the util directory.

      Here's what the file should look like:
      Code:
      # For END_PATH_LOCATION, remember, we DON'T include the trailing slash!!!!
      # END_PATH_LOCATION is the location on the server that we're going to be uploading to
      END_PATH_LOCATION="5.00/modules/util"
      So, if my directory structure is like this:
      Code:
      module_name/
      module_name/.git
      module_name/includes
      module_name/scripts
      My ftp.dat file would be located at: module_name/.git/ftp.dat

      The upload.sh script is the next part. It's simple too, with two forms of usage: 1 parameter and 2 parameters.

      The 1-parameter usage is for when you want to upload to a default server -- say, a development server that you use all the time. It calls the xsltproc program to build the proper cURL command and executes the result.

      So, the first syntax would be something like:
      Code:
      upload.sh example_module.mvc
      You simply tell it the name of the module you want to upload (that's in the same directory as you are currently in), and it'll upload it to a pre-defined default site.

      The 2 parameter usage is when you upload a specific file to a specific server.

      That looks like this:
      Code:
      upload.sh example_module.mvc "Foo Bar Dev Site"
      Where "Foo Bar Dev Site" is what you NAMED the server in your Site Manager in FileZilla.

      The upload.sh file has two variables:

      XSLT_LOCATION - This is where the .xsl file is located (we'll get to this next)
      SITE_MANAGER_LOCATION - This is where your sitemanager.xml file is located.

      Here's what that file looks like:
      Code:
      #!/bin/sh
      # $1 - File Name
      # $2 - Site Name (as you've named it in your Site Manager)
      # $END_PATH_LOCATION - The location of the upload
      # TODO: look into getting the cwd, and uploading to that from a 'base'
      . .git/ftp.dat
      XSLT_LOCATION="/d/PCINET/scripts/xml/select_server.xsl"
      SITE_MANAGER_LOCATION="/C/Users/Tim/AppData/Roaming/FileZilla/sitemanager.xml"


      if [ $# -eq 1 ] ; then
         xsltproc --stringparam file_name "$1" --stringparam end_path_location "$END_PATH_LOCATION" "$XSLT_LOCATION" "$SITE_MANAGER_LOCATION" | sh
      elif [ $# -eq 2 ] ; then
         xsltproc --stringparam file_name "$1" --stringparam site_name "$2" --stringparam end_path_location "$END_PATH_LOCATION" "$XSLT_LOCATION" "$SITE_MANAGER_LOCATION" | sh
      else
         echo "Usage (1) is: upload.sh FILE_NAME (uploads to the default site)"
         echo "Usage (2) is: upload.sh FILE_NAME SITE_NAME"
         exit 1
      fi

      The next part is the XSLT file: select_server.xsl
      Code:
      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
        <xsl:param name="site_name">DEFAULT_SITE_HERE</xsl:param>
        <xsl:param name="file_name"></xsl:param>
        <xsl:param name="end_path_location"></xsl:param>
        <xsl:output method="text"/>
        <xsl:template match="Servers">
          <xsl:for-each select="//Server">
            <xsl:if test="Name=$site_name">
              <xsl:choose>
                <xsl:when test="Comments != ''">
      curl -T <xsl:value-of select="$file_name" /> ftp://<xsl:value-of select="Host"/>/<xsl:value-of select="Comments"/>/<xsl:value-of select="$end_path_location" />/  --user <xsl:value-of select="User"/>:<xsl:value-of select="Pass"/>
                </xsl:when>
                <xsl:otherwise>
      curl -T <xsl:value-of select="$file_name" /> ftp://<xsl:value-of select="Host"/>/httpdocs/mm5/<xsl:value-of select="$end_path_location" />/  --user <xsl:value-of select="User"/>:<xsl:value-of select="Pass"/>
                </xsl:otherwise>
              </xsl:choose>
            </xsl:if>
          </xsl:for-each>
        </xsl:template>
      </xsl:stylesheet>
      What I need to do is make sure that the location I save this file to is the same location that's defined in my upload.sh script for XSLT_LOCATION.

      When I have all of that set up (which takes only a couple of seconds), I can easily query the sitemanager.xml file, pull the current username and password for a client's FTP server, and go about my way.
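One habit that helps while setting this up: run the xsltproc half by itself, without the `| sh`, so you can eyeball the generated curl command before anything actually executes (paths and names below are the examples from this post):

```shell
xsltproc --stringparam file_name example_module.mvc \
         --stringparam site_name "Foo Bar Dev Site" \
         --stringparam end_path_location "5.00/modules/util" \
         /d/PCINET/scripts/xml/select_server.xsl \
         /C/Users/Tim/AppData/Roaming/FileZilla/sitemanager.xml
```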

      This is what I've been wanting to do for a while now, and I'm glad that I did it. It has saved me time and given me a more flexible environment to develop from.

      What are some things I can improve on?

      NAS Integration
      If I set my sitemanager.xml location to my NAS, then I can use the script from any computer on my network without having to replicate the same data on each one. It makes working from different computers easier (especially if you're working in the evening while watching Doctor Who with your son).
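The NAS idea boils down to a one-line change in upload.sh; the share path below is hypothetical:

```shell
# Point at the shared copy on the NAS instead of the local FileZilla profile
SITE_MANAGER_LOCATION="/z/nas_share/filezilla/sitemanager.xml"
```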

      Better Awareness of Folders
      One of the other things I want to do is figure out how to get the script to look up the directory structure, back to the "parent", to find out where the .git folder is located.

      If I can do that, then I can set it up to do an "upload.sh" from anywhere within a specific repo, get the root location, and upload the file to the matching location on the server.

      Currently, the implementation is limited to files whose parent folder contains the .git/ftp.dat file.

      So, if I have a structure like:
      Code:
      /example_module/.git
      /example_module/.git/ftp.dat
      /example_module/scripts/
      /example_module/scripts/common.js
      And if I wanted to upload common.js, I would have to either do it from the /example_module/ directory and specify the path relatively, or create a .git/ftp.dat file in that folder.
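The "look up the directory structure" part can be done in plain shell by walking upward until a .git folder appears (or, since Git is already installed, `git rev-parse --show-toplevel` does the same job); a sketch of the plain-shell version, with a function name I made up:

```shell
# Walk upward from the current directory until a .git folder is found;
# print the repo root, or return nonzero if we hit / without finding one.
find_repo_root() {
    dir=$(pwd)
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ]; then
            echo "$dir"
            return 0
        fi
        dir=$(dirname "$dir")
    done
    return 1
}

# upload.sh could then start with:
# root=$(find_repo_root) && . "$root/.git/ftp.dat"
```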

      I would seriously love to hear some feedback and critiques on this. I know my bash scripts probably look like they were written by an 8th grader, but I don't write them all that often. My XSLT template is probably hacked together too -- it was my first time writing that as well.

      Again.. let me know! I don't know if anyone else is interested in this stuff, but I wanted to share it.
